A Simple Guide to OpenAI API Integration with Golang Boilerplate

Consuming OpenAI GPT API with Go-Chi and PostgreSQL on the Golang Boilerplate

Sigrid Jin
6 min read · Apr 9, 2023

OpenAI’s GPT-3.5 Turbo and GPT-4 are among the most advanced large language models that generate human-like text. Developers can incorporate these models into their applications through the OpenAI API. This article explains how to add the OpenAI API to server applications written in Golang. I have published a boilerplate that includes a custom OpenAI client, the Go-chi framework for the server-side functions, and PostgreSQL. Let’s take a look at how to use it.

Please be aware that the boilerplate is rigorously tested against text-davinci-003, the most widely available model option to date. The boilerplate supports GPT-4 by using a reverse-engineered ChatGPT client (see reversed.py in the repository), but please be careful when utilizing this unofficial client.

To begin with, it is essential to ensure that the environment variables are appropriately defined. This can be achieved by creating a ./config.yaml file with the necessary details, like the following.

Environment: DEV
OpenAIEnv:
  API_KEY: "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"      # if you are going to use the official API
  ACCESS_TOKEN: "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" # if you are going to use the reverse-engineered API

The Environment variable should be set to DEV or PROD so that the correct environment is used for your development stage. The OpenAI environment variables, namely the API key and the access token, also need to be set up. If you plan to use the official OpenAI API, specify your API key in the API_KEY field.

On the other hand, if you plan to access GPT-4 unofficially through the reverse-engineered API, specify the access token of your ChatGPT account in the ACCESS_TOKEN field.
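For reference, here is a minimal sketch of how such a file could be loaded, assuming a Config struct mirroring the YAML above and the gopkg.in/yaml.v3 package (the boilerplate’s actual loader and struct names may differ):

package config

import (
	"os"

	"gopkg.in/yaml.v3"
)

// Config mirrors the structure of ./config.yaml shown above (names assumed).
type Config struct {
	Environment string `yaml:"Environment"`
	OpenAIEnv   struct {
		APIKey      string `yaml:"API_KEY"`
		AccessToken string `yaml:"ACCESS_TOKEN"`
	} `yaml:"OpenAIEnv"`
}

// Load reads and parses the YAML configuration file at the given path.
func Load(path string) (*Config, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg Config
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}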

Completion

The Completions API is one of the most commonly used OpenAI APIs for working with GPT-based language models. The boilerplate implements other API endpoints as well, but let us explore how to use it by looking specifically at the completion example in the unit test in test/client_test.go.

Look at the test example below (see test/client_test.go).

  • I create a CompletionRequest object with the prompt "this is a test", a maximum of 3 tokens, and the default model, which is GPT-3.5 Turbo.
  • In the testing environment, the setupTest helper creates an echo.Context and an api.Handler object, which will handle the completion request (a sketch of this helper follows the test).
  • The hd.CreateCompletion method is called on the Handler object, passing in the echo.Context object; it is explained later.
  • The returned response body is unmarshalled into a CompletionResponse, which contains the texts generated by the OpenAI API.
func TestCreateCompletion(t *testing.T) {
	// given
	bodyTest := client.NewCompletionRequest("this is a test", 3, nil, nil)
	bodyRaw, err := json.Marshal(bodyTest)
	if err != nil {
		t.Errorf("could not marshal request body: %v", err)
	}
	err, ectx, hd := setupTest(t, http.MethodPost, client.CreateCompletionEndpoint, &bodyRaw, nil)
	if err != nil {
		t.Errorf("could not setup test: %v", err)
	}

	// when
	err = hd.CreateCompletion(ectx)

	// then
	if err != nil {
		t.Errorf("could not create completion: %v", err)
	}
	res := ectx.Response()
	if res.Status != http.StatusOK {
		t.Errorf("expected status OK but got %v", res.Status)
	}
	bodyVerify := res.Writer.(*httptest.ResponseRecorder).Body
	var completionResponse client.CompletionResponse
	if err = json.Unmarshal(bodyVerify.Bytes(), &completionResponse); err != nil {
		t.Errorf("could not unmarshal response: %v", err)
	}
	if len(completionResponse.Choices) == 0 {
		t.Errorf("expected at least one completion but got %v", len(completionResponse.Choices))
	}
}
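The setupTest helper above is not part of this excerpt. Here is a rough sketch of what it presumably does, built from httptest and echo primitives (imports: bytes, io, net/http/httptest, testing, and github.com/labstack/echo/v4). Note that api.NewHandler and the exact return order are assumptions mirroring the call in the test:

// Hypothetical sketch of setupTest; the boilerplate's real helper may differ.
func setupTest(t *testing.T, method, endpoint string, body *[]byte, _ interface{}) (error, echo.Context, *api.Handler) {
	t.Helper()
	var reader io.Reader
	if body != nil {
		reader = bytes.NewReader(*body)
	}
	req := httptest.NewRequest(method, endpoint, reader)
	req.Header.Set(echo.HeaderContentType, echo.MIMEApplicationJSON)
	rec := httptest.NewRecorder()
	ectx := echo.New().NewContext(req, rec)
	hd, err := api.NewHandler(&ectx) // assumed constructor that wires in the OpenAI client
	return err, ectx, hd
}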

Okay, let me briefly introduce the customized client design dedicated to communicating with the OpenAI API. I referred to the official documentation, PullRequestInc’s implementation, and 0x9ef’s openai-go implementation. I highly recommend reviewing the docs and others’ code before applying the template.

// Part of the API handler on go-chi
func (hd *Handler) CreateCompletion(_ echo.Context) error {
	var cr cif.CompletionRequest
	if err := (*hd.ectx).Bind(&cr); err != nil {
		return err
	}
	res, err := hd.oc.CreateCompletion((*hd.ectx).Request().Context(), cr)
	if err != nil {
		return err
	}
	return (*hd.ectx).JSON(200, res)
}

The method under test is CreateCompletion on the go-chi handler. It accepts a request from the client (here, the testing method), binds it to a CompletionRequest, and forwards it to OpenAIClient.go.

// See internal/pkg/client/OpenAIClient.go
func (oc OpenAIClient) CreateCompletion(ctx context.Context, request client.CompletionRequest) (*client.CompletionResponse, error) {
	return oc.CreateCompletionWithEngine(ctx, oc.DefaultEngine, request)
}

func (oc OpenAIClient) CreateCompletionWithEngine(ctx context.Context, _ string, request client.CompletionRequest) (*client.CompletionResponse, error) {
	req, err := oc.NewRequestBuilder(ctx, http.MethodPost, client.OpenAICompletionEndPoint, request)
	if err != nil {
		return nil, err
	}
	resp, err := oc.ExecuteRequest(req)
	if err != nil {
		return nil, err
	}
	output := new(client.CompletionResponse)
	if err := oc.getResponseObject(resp, output); err != nil {
		return nil, err
	}
	return output, nil
}

The client passes the completion request to OpenAI’s Completions API endpoint and returns the completion response after unmarshalling it into the CompletionResponse type.
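The NewRequestBuilder, ExecuteRequest, and getResponseObject helpers are not shown in this excerpt. Here is a minimal sketch of what they do, assuming the client struct carries the API key from config.yaml and an *http.Client (the field names here are illustrative, not the boilerplate’s actual ones):

// Illustrative sketch; OpenAIClient field names are assumptions.
func (oc OpenAIClient) NewRequestBuilder(ctx context.Context, method, endpoint string, body interface{}) (*http.Request, error) {
	payload, err := json.Marshal(body)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequestWithContext(ctx, method, endpoint, bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+oc.APIKey) // API_KEY from config.yaml
	return req, nil
}

func (oc OpenAIClient) ExecuteRequest(req *http.Request) (*http.Response, error) {
	return oc.HTTPClient.Do(req) // assumed *http.Client field on the struct
}

func (oc OpenAIClient) getResponseObject(resp *http.Response, output interface{}) error {
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("openai: unexpected status %d", resp.StatusCode)
	}
	return json.NewDecoder(resp.Body).Decode(output)
}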

// See pkg/client/completion.go
// CompletionResponse is the full response from a request to the completions API
type CompletionResponse struct {
	ID      string                     `json:"id"`
	Object  string                     `json:"object"`
	Created int                        `json:"created"`
	Model   string                     `json:"model"`
	Choices []CompletionResponseChoice `json:"choices"`
	Usage   CompletionResponseUsage    `json:"usage"`
}
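The nested CompletionResponseChoice and CompletionResponseUsage types are not shown above. Following the standard completions response schema, they would look roughly like this:

// Sketch of the nested response types, mirroring the OpenAI completions schema.
type CompletionResponseChoice struct {
	Text         string `json:"text"`
	Index        int    `json:"index"`
	FinishReason string `json:"finish_reason"`
}

type CompletionResponseUsage struct {
	PromptTokens     int `json:"prompt_tokens"`
	CompletionTokens int `json:"completion_tokens"`
	TotalTokens      int `json:"total_tokens"`
}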

When a language model always chooses the token with the highest probability, the generated text can become repetitive and unoriginal. To address this, the concept of temperature is introduced: it lets the model occasionally select lower-probability tokens, resulting in more diverse and creative responses.

Temperature is a value between 0 and 1, where higher values increase the likelihood of the GPT model taking risks and selecting a lower-probability token. This enables more innovative responses, but the value needs to be tuned to the specific requirements of your service.
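Concretely, temperature divides the model’s logits before the softmax, flattening the probability distribution as it grows. A toy illustration in Go, not part of the boilerplate:

package main

import (
	"fmt"
	"math"
)

// softmax converts logits into probabilities; dividing by the temperature
// first sharpens (T < 1) or flattens (T > 1) the distribution. As T
// approaches 0, sampling becomes effectively greedy, which is why the API
// treats temperature 0 as deterministic.
func softmax(logits []float64, temperature float64) []float64 {
	probs := make([]float64, len(logits))
	var sum float64
	for i, z := range logits {
		probs[i] = math.Exp(z / temperature)
		sum += probs[i]
	}
	for i := range probs {
		probs[i] /= sum
	}
	return probs
}

func main() {
	logits := []float64{2.0, 1.0, 0.1}
	fmt.Println(softmax(logits, 0.5)) // sharper: ~[0.86 0.12 0.02]
	fmt.Println(softmax(logits, 1.0)) // softer:  ~[0.66 0.24 0.10]
}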

The default temperature in this implementation is 0.0, which makes the response deterministic for a given request. You can change this setting when creating a new request with NewCompletionRequest, like the following.

temperature := 0.7
// Assuming NewCompletionRequest accepts an optional trailing temperature argument.
bodyTest := client.NewCompletionRequest("this is a test", 3, nil, nil, temperature)
bodyRaw, err := json.Marshal(bodyTest)
if err != nil {
	t.Errorf("could not marshal request body: %v", err)
}
err, ectx, hd := setupTest(t, http.MethodPost, client.CreateCompletionEndpoint, &bodyRaw, nil)
if err != nil {
	t.Errorf("could not setup test: %v", err)
}
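On the wire, the temperature travels in the JSON request body. Assuming the boilerplate’s CompletionRequest mirrors the official completions parameters, the relevant fields would look roughly like this:

// Sketch of the request type; the JSON field names follow the OpenAI completions API.
type CompletionRequest struct {
	Model       string   `json:"model"`
	Prompt      string   `json:"prompt"`
	MaxTokens   int      `json:"max_tokens"`
	Temperature float64  `json:"temperature"` // defaults to 0.0 in this boilerplate
	Stop        []string `json:"stop,omitempty"`
}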

Embedding the Reverse-Engineered Client Written in Python

For the reverse-engineered client, I recommend utilizing Antonio Cheong’s V3 implementation, which is available on GitHub. Note that since the boilerplate was created before the release of GPT-4, you may need to add a new field to support GPT-4, or use the reversed client until official support is implemented.

In order to use the reverse-engineered client, you will need to retrieve the access token from the ChatGPT website and send your requests directly to ChatGPT rather than through the official API. However, please keep in mind that response times may be slower compared to the official API.

func (hd *Handler) RunGptPythonClient(_ echo.Context) error {
	accessToken, err := (*hd.oc).GetAccessToken()
	if err != nil {
		return err
	}

	var promptRaw cif.GPTPromptRequest
	if err := (*hd.ectx).Bind(&promptRaw); err != nil {
		return (*hd.ectx).JSON(400, err.Error())
	}

	prompt, err := cif.CreatePrompt(promptRaw)
	if err != nil {
		return err
	}
	promptInString := prompt.String()

	result, err := exec.Command("python", "../pkg/client/ChatbotRunner.py", accessToken, promptInString).Output()
	if err != nil {
		return (*hd.ectx).JSON(500, err.Error())
	}

	responseBody := cif.GPTPromptSuccessfulResponse{
		Result: string(result),
	}

	return (*hd.ectx).JSON(200, responseBody)
}

The following is ChatbotRunner.py, which is invoked with exec.Command from Go’s os/exec package so that the Python client can be driven from the Golang server. The script prints the final response to stdout, which the Go handler captures via .Output().

import argparse

from Chatbot import Chatbot


def main(access_token, prompt):
    chatbot = Chatbot(config={
        "access_token": access_token
    })

    response = ""

    for data in chatbot.ask(prompt):
        response = data["message"]

    print(response)

    return response


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("access_token", help="access token for the chatbot")
    parser.add_argument("prompt", help="prompt to pass to the chatbot")
    args = parser.parse_args()

    response = main(args.access_token, args.prompt)

I would greatly appreciate your contributions if you are interested in further developing this public template. Looking at similar projects in the Python and JavaScript ecosystems, such as LangChain, and in the .NET ecosystem, such as Microsoft’s Semantic Kernel, it is clear that there is a need for a well-crafted LLM client in the Go ecosystem. While Leizhen Peng’s implementation wrapping the DALL·E, Whisper, and GPT-4 models is impressive, there is room for expansion and improvement in the Golang ecosystem for utilizing LLMs. I am looking forward to more public goods!
