
API DEVELOPMENT

Can AI build a deployable API?

Matt Voget
September 18, 2024 | 3 min read
Build a Deployable API with AI in Minutes

Yes! And can it do so in 5 minutes? Also yes!

By chaining a few concepts together it’s possible to use AI to generate a fully functional and deployable API in no time. Here’s how:

  1. Create an OAS file by chatting with an AI
  2. Use the OAS file as an input to a server-side API code generator
  3. Validate the generated API code by deploying it into a Kubernetes environment

AI Generated OAS Files (1 Minute)

Using AI is a great way to generate an OAS file. But like any interaction with AI, you get out what you put in, which is where a little prompt engineering can go a long way. Fortunately, tools like Blackbird API Development have already done that prompt engineering, making it quick to create valid OAS files.

Here’s a prompt I used in the Blackbird API Development platform to create an API spec for a fictional AI chatbot:

I want to create an API for my AI chatbot that includes multiple API endpoints. These endpoints should allow for submitting a chat message as well as getting and setting the model

{
  "openapi": "3.0.1",
  "info": {
    "title": "AI Chatbot API",
    "description": "API for interacting with an AI chatbot, allowing users to submit chat messages and manage AI models.",
    "version": "1.0.0",
    "contact": {
      "name": "API Support",
      "url": "https://support.example.com",
      "email": "support@example.com"
    }
  },
  "servers": [
    {
      "url": "https://api.example.com/v1",
      "description": "Production server"
    }
  ],
  "tags": [
    {
      "name": "Chat",
      "description": "Operations related to chat messages"
    },
    {
      "name": "Model",
      "description": "Operations related to AI models"
    }
  ],
  "paths": {
    "/chat/messages": {
      "post": {
        "operationId": "submitChatMessage",
        "tags": [
          "Chat"
        ],
        "summary": "Submit a chat message to the AI chatbot",
        "description": "This endpoint allows users to send a message to the AI chatbot and receive a response.",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/ChatMessage"
              }
            }
          }
        },
        "responses": {
          "200": {
            "description": "Successful response with the chatbot's reply",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/ChatResponse"
                }
              }
            }
          },
          "404": {
            "description": "Chatbot not found"
          },
          "500": {
            "description": "Internal server error"
          }
        }
      }
    },
    "/model": {
      "get": {
        "operationId": "getModel",
        "tags": [
          "Model"
        ],
        "summary": "Retrieve the current AI model",
        "description": "This endpoint retrieves the currently set AI model.",
        "responses": {
          "200": {
            "description": "Successful response with the current model",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/Model"
                }
              }
            }
          },
          "404": {
            "description": "Model not found"
          },
          "500": {
            "description": "Internal server error"
          }
        }
      },
      "put": {
        "operationId": "setModel",
        "tags": [
          "Model"
        ],
        "summary": "Set a new AI model",
        "description": "This endpoint allows users to set a new AI model.",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/Model"
              }
            }
          }
        },
        "responses": {
          "200": {
            "description": "Successful response indicating the model has been set"
          },
          "404": {
            "description": "Model not found"
          },
          "500": {
            "description": "Internal server error"
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "ChatMessage": {
        "type": "object",
        "required": [
          "message"
        ],
        "properties": {
          "message": {
            "type": "string",
            "description": "The message to send to the chatbot.",
            "example": "Hello, how are you?"
          }
        },
        "additionalProperties": false
      },
      "ChatResponse": {
        "type": "object",
        "properties": {
          "response": {
            "type": "string",
            "description": "The response from the chatbot.",
            "example": "I'm doing well, thank you!"
          }
        },
        "additionalProperties": false
      },
      "Model": {
        "type": "object",
        "required": [
          "modelName"
        ],
        "properties": {
          "modelName": {
            "type": "string",
            "description": "The name of the AI model.",
            "example": "gpt-3.5-turbo"
          },
          "version": {
            "type": "string",
            "description": "The version of the AI model.",
            "example": "1.0.0"
          }
        },
        "additionalProperties": false
      }
    }
  }
}
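Before feeding a spec to a generator, it's worth a quick structural sanity check. The sketch below is plain Go, no Blackbird involved; the embedded spec is trimmed to just the fields being checked. It unmarshals the paths and lists each operation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// Trimmed copy of the generated spec -- just enough to check the paths.
const spec = `{
  "openapi": "3.0.1",
  "paths": {
    "/chat/messages": {"post": {"operationId": "submitChatMessage"}},
    "/model": {
      "get": {"operationId": "getModel"},
      "put": {"operationId": "setModel"}
    }
  }
}`

// listOperations returns "method path -> operationId" lines, sorted.
func listOperations(raw string) ([]string, error) {
	var doc struct {
		Paths map[string]map[string]struct {
			OperationID string `json:"operationId"`
		} `json:"paths"`
	}
	if err := json.Unmarshal([]byte(raw), &doc); err != nil {
		return nil, err
	}
	var ops []string
	for path, methods := range doc.Paths {
		for method, op := range methods {
			ops = append(ops, fmt.Sprintf("%s %s -> %s", method, path, op.OperationID))
		}
	}
	sort.Strings(ops)
	return ops, nil
}

func main() {
	ops, err := listOperations(spec)
	if err != nil {
		panic(err)
	}
	for _, op := range ops {
		fmt.Println(op)
	}
}
```

Running this prints the three operations the spec defines, which is a cheap way to confirm the AI didn't silently drop an endpoint.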

Server-Side API Code Generation (2 Minutes)

Now with a valid OAS file in hand, we can leverage server-side code generators to implement the boilerplate code for this API.

While it’s easy to find a plethora of client-side code generators and SDKs, the number of OpenAPI-based server-side code generators is limited. Fortunately, Blackbird has us covered again.

Using Blackbird’s CLI, we’ll feed the AI Chatbot OpenAPI spec to Blackbird’s code generate command:

blackbird code generate -s ./ai-chatbot.json -t go -o ai-chatbot-api

This command generated the whole API project in Go, including request/response schemas and a handler stub for each endpoint.


Let’s test it out:

# Run the chatbot API on localhost
cd ai-chatbot-api && go run cmd/ai-chatbot/main.go

Great! The API is up and running on localhost. Blackbird stubbed out the three endpoints for this API as well as the schemas. Here’s a quick look at the API’s routes (in routes.go):

func (h *APIHandler) GetRoutes() Routes {
    return Routes{
        {
            "submitChatMessage",
            "/v1/chat/messages",
            "POST",
            h.HandleSubmitChatMessage,
        }, {
            "getModel",
            "/v1/model",
            "GET",
            h.HandleGetModel,
        }, {
            "setModel",
            "/v1/model",
            "PUT",
            h.HandleSetModel,
        },
    }
}

With the generated endpoint handling and schema code, I can quickly implement one of the operations. I’ll choose /v1/chat/messages:

// This endpoint allows users to send a message to the AI chatbot and receive a response.
// Submit a chat message to the AI chatbot
func (h *APIHandler) SubmitChatMessage(ctx context.Context, reqBody ChatMessage) (Response, error) {
    return NewResponse(200, ChatResponse{Response: "This is a pre-canned chat response"}, "application/json", nil), nil
    // return NewResponse(404, {}, "", responseHeaders), nil
    // return NewResponse(500, {}, "", responseHeaders), nil
}

It’s a rough implementation, but it’s nice to have the schemas already defined where I need them, as well as stubs for the error cases (these were pulled directly from the OAS file).
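Stripped of the generated plumbing, the handler logic boils down to a pure function from request schema to response schema. Here's a self-contained sketch of that idea, with ChatMessage and ChatResponse redefined locally to mirror the OAS schemas (this is an illustration, not the generated code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local mirrors of the schemas defined in the OAS file.
type ChatMessage struct {
	Message string `json:"message"`
}

type ChatResponse struct {
	Response string `json:"response"`
}

// submitChatMessage mirrors the stubbed handler: it accepts a
// message and returns a canned reply.
func submitChatMessage(req ChatMessage) ChatResponse {
	return ChatResponse{Response: "This is a pre-canned chat response"}
}

func main() {
	out, _ := json.Marshal(submitChatMessage(ChatMessage{Message: "Hello chatbot!"}))
	fmt.Println(string(out))
	// prints {"response":"This is a pre-canned chat response"}
}
```

Keeping the business logic in a function like this makes it trivially unit-testable, independent of the HTTP layer.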

Finally, let’s test this out on localhost with a quick curl command:

# Curl the /chat/messages endpoint
curl --request POST -d '{"message":"Hello chatbot!"}' http://localhost/v1/chat/messages
{
  "response": "This is a pre-canned chat response"
}

Deploying the API (2 Minutes)

The generated API code “works on my machine”™. But the real test is whether we can get it containerized, deployed, and tested in a hosted k8s environment. Once again, Blackbird has me covered:

blackbird deployment create ai-chatbot -d ./Dockerfile -c .

With this single command, Blackbird did the following:

  • Built an image with the auto-generated Dockerfile and my API code
  • Deployed the image into Blackbird’s hosted k8s environment
  • Created the necessary mappings to provide a publicly accessible URL for testing

Let’s run our same curl command as before, but this time against the public URL where our deployment is running:

curl --request POST -d '{"message":"Hello chatbot!"}' https://matts-env-5b603.blackbird-relay.a8r.io/ai-chatbot/v1/chat/messages
{
  "response": "This is a pre-canned chat response"
}

Success! In 5 minutes I went from a conversation with AI to working and deployable API code 🚀
