sashabaranov / go-openai
OpenAI ChatGPT, GPT-3, GPT-4, DALL·E, Whisper API wrapper for Go
License: Apache License 2.0
I would like the timeout to be configurable or, better, taken from the context passed in.
Currently this library defaults it to 1 minute, but many complex API calls to OpenAI take longer than that.
I'm trying to get started with the example from the readme file. I'm pretty sure that I'm providing the correct API key but it gives me the following error:
2023/01/12 02:32:32 error, status code: 401
exit status 1
The command below with the same API key gives me the status 200 OK.
curl https://api.openai.com/v1/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{"model": "text-davinci-003", "prompt": "Say this is a test", "temperature": 0, "max_tokens": 7}'
How to keep the context in the conversation
I'm trying to run a continuous question-and-answer session, but each request starts a new context, so the model can't refer to the question I asked earlier.
I think there should be an attribute like session_id, but there isn't, so how should I do it?
// CompletionRequest represents a request structure for completion API.
type CompletionRequest struct {
	Model            string         `json:"model"`
	Prompt           string         `json:"prompt,omitempty"`
	Suffix           string         `json:"suffix,omitempty"`
	MaxTokens        int            `json:"max_tokens,omitempty"`
	Temperature      float32        `json:"temperature,omitempty"`
	TopP             float32        `json:"top_p,omitempty"`
	N                int            `json:"n,omitempty"`
	Stream           bool           `json:"stream,omitempty"`
	LogProbs         int            `json:"logprobs,omitempty"`
	Echo             bool           `json:"echo,omitempty"`
	Stop             []string       `json:"stop,omitempty"`
	PresencePenalty  float32        `json:"presence_penalty,omitempty"`
	FrequencyPenalty float32        `json:"frequency_penalty,omitempty"`
	BestOf           int            `json:"best_of,omitempty"`
	LogitBias        map[string]int `json:"logit_bias,omitempty"`
	User             string         `json:"user,omitempty"`
}
Nothing in this request body identifies a session.
Similar to CreateCompletion and CreateCompletionStream, we need a CreateChatCompletionStream for CreateChatCompletion.
Hi Team,
I have tried to use the codex model, but there is no support for the beta version API. Do you guys think we should add support for the beta version API? I would be happy to help with this.
Are you considering introducing generics in future releases?
If we introduce generics:
Pros: the SDK implementation could become more elegant.
Cons: it would require Go 1.18 or above, so compatibility becomes worse.
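As a sketch of the "pros" side: a single generic decoding helper could replace per-endpoint response boilerplate. The names here (`decodeInto`, `completionResponse`) are hypothetical, and the code requires Go 1.18+:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeInto is a hypothetical generic helper: one function that
// unmarshals any response type, instead of one decode path per endpoint.
func decodeInto[T any](data []byte) (T, error) {
	var v T
	err := json.Unmarshal(data, &v)
	return v, err
}

type completionResponse struct {
	ID string `json:"id"`
}

func main() {
	resp, err := decodeInto[completionResponse]([]byte(`{"id":"cmpl-123"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.ID) // cmpl-123
}
```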
I am getting a wide variety of results from GPT, but my temperature is 0. I'm using a custom-trained model. I contacted GPT/openai, but their customer support seems to be inadequate.
req := gogpt.CompletionRequest{
	Model:       "davinci:ft-text-friday-2022-10-02-03-58-57",
	MaxTokens:   100, // 16 is default
	Prompt:      prompt,
	Temperature: 0,
	//TopP: 1,
	//FrequencyPenalty: 0,
	//PresencePenalty: 0,
	Stop: []string{" ->", ">>>"},
}
Example 1
TRC Respond in JSON. I want to buy water. ->
8:02AM TRC GPT 5
8:02AM TRC GPT 6
8:02AM TRC GPT 6.a
8:02AM TRC returned: null
We'll update the last message sent to match the reply, and we'll update the request.id to say:
{ 'Store': 'water', 'Category': 'latest', 'Message': 'I want to buy water', 'Date': '2016-09-21T10:52:13', 'RequestId': '20', 'Reply': 'null', 'ArrivalDate': '2016-09-21T10:52:13'}
Example 2, sent seconds later
8:02AM TRC Respond in JSON. I want to buy water. ->
8:02AM TRC GPT 5
8:02AM TRC GPT 6
8:02AM TRC GPT 6.a
8:02AM TRC returned: {'Function': 'contact_support', 'Noun': 'water', 'Adjectives': [], 'Category': '', 'ClothingType': '', 'Quantity': 0, 'ArrivalDate': '', 'Store': '', 'MinPrice': 0, 'MaxPrice': 0, 'Multiple': False}
For practical reasons, requests from my location occasionally time out, so do you plan to add the ability to retry in this project?
At present, I don't know how to get the original error message.
I need to make conditional judgments based on the original error code and act accordingly.
Please let me know if you have a solution. Thanks.
Hey all! I'd love to see support for the newly-released edits endpoint.
OpenAI's API has recently changed so that API calls are made to endpoints such as /v1/completions for completions, instead of /v1/engines/${engineID}/completions.
I'm currently migrating to using go-gpt3 instead of my own client, and would like to incorporate a mock server for simulating responses. This would be incredibly helpful to avoid handling the old API.
It seems that only this interface is missing and has not been implemented:
https://platform.openai.com/docs/api-reference/fine-tunes/create
We should track code coverage of the go-gpt3 client with a tool such as codecov.
The current embeddings API could use an improvement: float32 instead of float64 (I highly doubt that OpenAI uses 64-bit floats in their LLM).
Hi 👋 Thanks for developing a great library!
OpenAI's create-prompt documentation says the prompt field supports a string, an array of strings, an array of tokens, or an array of token arrays, but this library seems to support only a string.
openai-js implemented a type for that like this: https://github.com/threepointone/openai-js/blob/eaade749f2ead531d2ac9a2015184f7b6418a581/api.ts#L475
So I want to ask: how can I send a completion request with an array of strings?
Thanks
I think we can do a better job with documentation by providing testable examples: https://go.dev/blog/examples
In some network environments it is impossible to access api.openai.com directly. Would you consider adding a proxy option during initialization?
Hi,
Thanks for your excellent work.
Yesterday, making calls to the CreateCompletionStream() function, I started getting the following error:
Stream error: stream has sent too many empty messages
Requests were like this:
{
	"model": "text-davinci-003",
	"prompt": "В чем смысл жизни?",
	"max_tokens": 512,
	"temperature": 0.7,
	"stream": true
}
Seems like it's related to emptyMessagesLimit = 100 in stream.go.
My code:
stream, err := c.CreateCompletionStream(context.Background(), params)
if err != nil {
	return
}
defer stream.Close()
for {
	response, err := stream.Recv()
	if errors.Is(err, io.EOF) {
		return
	}
	if err != nil {
		fmt.Printf("Stream error: %v\n", err)
		return
	}
	choices := response.Choices
	if len(choices) > 0 {
		fmt.Printf("%s", choices[0].Text)
	}
}
I copied it from your example. Before yesterday it worked fine. Today this error seems to persist.
Could you please check it or make the variable configurable?
Thanks again.
It would be nice to have improved error reporting for streams (based on #68)
New interface address: https://platform.openai.com/docs/api-reference/chat/create
The request interface has been updated, as well as the request parameters, so there may be a lot of things that need to change.
I'd like to know whether this is already being worked on.
The code is below:
for {
	response, err := stream.Recv()
	if errors.Is(err, io.EOF) {
		fmt.Println("Stream finished")
		return
	}
	if err != nil {
		fmt.Printf("Stream error: %v\n", err)
		return
	}
	fmt.Printf("%v\n", err)
	fmt.Printf("%T\n", response.Choices[0].Text)
	//fmt.Printf("%s", response.Choices[0].Text)
}
This is the output:
<nil>
string
<nil>
string
<nil>
string
<nil>
string
<nil>
string
<nil>
string
<nil>
string
<nil>
string
<nil>
string
<nil>
string
<nil>
string
<nil>
string
<nil>
string
<nil>
string
<nil>
panic: runtime error: index out of range [0] with length 0
goroutine 1 [running]:
main.main()
/home/ly/Desktop/chatgpt/mydemo2/main.go:42 +0x545
Then, when the answer is over, err is not equal to io.EOF, so this branch never runs:
if errors.Is(err, io.EOF) {
fmt.Println("Stream finished")
return
}
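The panic at main.go:42 comes from indexing `response.Choices[0]` on a chunk whose Choices slice is empty, which can happen on the terminal message of a stream. A sketch of the guard, with stand-in types so it runs on its own:

```go
package main

import "fmt"

// Stand-in types mirroring the shape of the streamed response.
type choice struct{ Text string }
type streamResponse struct{ Choices []choice }

func main() {
	// The final streamed chunk can arrive with an empty Choices slice,
	// which makes response.Choices[0] panic. Guard the index first.
	for _, response := range []streamResponse{
		{Choices: []choice{{Text: "hello"}}},
		{}, // terminal chunk with no choices
	} {
		if len(response.Choices) == 0 {
			continue
		}
		fmt.Print(response.Choices[0].Text)
	}
	fmt.Println()
}
```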
https://community.openai.com/t/answers-classification-search-endpoint-deprecation/18532/1
Trying to run the example, you will get an error because search was deprecated.
OpenAI announced support for a new chat API like ChatGPT. This has a similar request and response model to the completions endpoint but with slight variations.
POST: https://api.openai.com/v1/chat/completions
Request:
{
	"model": "gpt-3.5-turbo",
	"messages": [{"role": "user", "content": "Hello!"}]
}
Response:
{
	"id": "chatcmpl-123",
	"object": "chat.completion",
	"created": 1677652288,
	"choices": [{
		"index": 0,
		"message": {
			"role": "assistant",
			"content": "\n\nHello there, how may I assist you today?"
		},
		"finish_reason": "stop"
	}],
	"usage": {
		"prompt_tokens": 9,
		"completion_tokens": 12,
		"total_tokens": 21
	}
}
Further details can be found here: https://platform.openai.com/docs/guides/chat
They said:
In the future, we will deprecate
content-filter-alpha
in favor of the moderation endpoint. For now, we recommend that users begin transitioning to the new endpoint for testing.
Personally, I actively use content-filter-alpha, as this is required if your apps were already approved by OpenAI.
This is also required if one wants to pass the app review's Standard safety requirements.
Would love to see this being supported soon~
Here are some details about the moderation endpoint:
I can see that the CompletionRequest takes a Stream bool, but I don't see anything in the code that would indicate streaming support or functionality. I could very easily be missing something though, so wanted to check with you. Thanks.
Use Go to implement this function: https://platform.openai.com/tokenizer
CreateCompletionStream: when the HTTP response status is not 2xx, it returns an empty stream instead of reporting an error.
Expected: return an error.
When I change the model to gpt-3.5-turbo, the API returns an error message like this:
error, status code: 404, message: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
Maybe when you have used up your money it should also return an error, like:
You exceeded your current quota, please check your plan and billing details.
This is an API key for testing: 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
The 'omitempty' option on the request structs should be removed, because it generates an incorrect request when a parameter is 0. For example, a request with the temperature field set to 0 will actually return a response as if the field had been set to 1: the Go JSON encoder does not differentiate between a zero value and an unset value for float32, so OpenAI receives a request without a temperature field, which then defaults to a temperature of 1.
For some reason, the API is blocked in some countries.
After adding the following code, it works:
config := gogpt.DefaultConfig("token")
// Create an HTTP Transport and point it at the proxy server
proxyUrl, err := url.Parse("http://localhost:{port}")
if err != nil {
	panic(err)
}
transport := &http.Transport{
	Proxy: http.ProxyURL(proxyUrl),
}
// Create an HTTP client and set the Transport object as its Transport field
config.HTTPClient = &http.Client{
	Transport: transport,
}
c := gogpt.NewClientWithConfig(config)
Hey! Is it possible to add the option to specify a file ID in the SearchRequest, and the return_metadata?
https://beta.openai.com/docs/api-reference/searches/create#searches/create-file
authToken cannot be passed in when setting an HTTP proxy. When creating a custom config with NewClientWithConfig(), authToken cannot be set externally because the authToken field is lowercase (unexported). Please provide a way to set authToken.
I want to use GPT3TextDavinci003, but it cannot remember the last question. Is that supported?
Based on a request from #27, we should introduce a mocked server which tests that values are correctly sent by go-gpt3 and received by the server, as well as the ability for the client to receive data from the server.
This can help us to quickly iterate on tests without spending any OpenAI credits.
We need the following to complete this task
Streaming Response Example. When using the gpt-3.5-turbo model, stream.Recv always returns isFinished; text-davinci-003 is fine.
func test1(key, input string, callback func(message string)) {
	config := gogpt.DefaultConfig(key)
	config.EmptyMessagesLimit = 10000
	c := gogpt.NewClientWithConfig(config)
	ctx := context.Background()
	req := gogpt.CompletionRequest{
		Model:     gogpt.GPT3TextDavinci003,
		MaxTokens: 5,
		Prompt:    input,
		Stream:    true,
	}
	stream, err := c.CreateCompletionStream(ctx, req)
	if err != nil {
		return
	}
	defer stream.Close()
	i := 0
	for {
		i++
		response, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			fmt.Println(err.Error())
			fmt.Println("Stream finished")
			return
		}
		if err != nil {
			fmt.Printf("Stream error: %v\n", err)
			return
		}
		var txt string
		if len(response.Choices) > 0 {
			txt = response.Choices[0].Text
		}
		callback(txt)
		//fmt.Printf("Stream response: %v\n", response.Choices[0])
	}
}
the version is v1.2.0
thanks
Please forgive me if I am asking an insanely stupid question. I got the go-gpt3 code, then copied verbatim the example usage, and when I try to run the example code I get no response.
I did not download anything else besides go-gpt3; therefore, here is the (potentially insanely stupid) question: do I need to download something else, e.g., the OpenAI API in Python or in JS?
I don't like Python or JS, and would rather use pure Go if possible, but if there is no other choice and I have to get the Python or the JS version, which one would be the better choice?
Thank you
I am creating this issue to introduce myself as well. I am using go-gpt3 as the underlying library for my tool geppetto, and would like to provide an easy to use CLI tool for all openai APIs. As I was going through, I noticed that the search API was deprecated (and in fact, not even reachable from the doc page).
Should we mark it as deprecated in the library as well?
It seems each API request starts a new conversation; it can't answer based on the previous conversation.
So how can I use this API and keep the conversation context?
I have upgraded to the latest version, but it always returns EOF; the previous version was normal.
func test1(key, input string, callback func(message string)) {
	config := gogpt.DefaultConfig(key)
	config.EmptyMessagesLimit = 10000
	c := gogpt.NewClientWithConfig(config)
	ctx := context.Background()
	req := gogpt.CompletionRequest{
		Model:     gogpt.GPT3Dot5Turbo0301,
		MaxTokens: 100,
		Prompt:    input,
		Stream:    true,
	}
	stream, err := c.CreateCompletionStream(ctx, req)
	if err != nil {
		return
	}
	defer stream.Close()
	i := 0
	for {
		i++
		response, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			fmt.Println(err.Error())
			fmt.Println("Stream finished")
			return
		}
		if err != nil {
			fmt.Printf("Stream error: %v\n", err)
			return
		}
		var txt string
		if len(response.Choices) > 0 {
			txt = response.Choices[0].Text
		}
		callback(txt)
		//fmt.Printf("Stream response: %v\n", response.Choices[0])
	}
}
key := "sk-Z5vCOAEQ3IINgZ0KpzDkT3BlbkFJ******7BQtM"
test1(key, "hello", func(message string) {
	fmt.Print(message)
})
The recv method does not check the response status code, and it returns an empty message instead of an error message when the status is not 200.
To fix this, the recv method should check the status code of the response and return an error message if the status code is not 200. The error message should be of type ErrorResponse with an APIError field containing the appropriate error information.
type APIError struct {
	Code       *string `json:"code,omitempty"`
	Message    string  `json:"message"`
	Param      *string `json:"param,omitempty"`
	Type       string  `json:"type"`
	StatusCode int     `json:"-"`
}

type ErrorResponse struct {
	Error *APIError `json:"error,omitempty"`
}