mgallo / openai.ex
Community-maintained OpenAI API wrapper written in Elixir.
License: MIT License
When handling a streaming request, OpenAI.Client#91 may return %HTTPoison.Error{reason: _, id: nil}, which then causes the following:
** (FunctionClauseError) no function clause matching in anonymous fn/1 in OpenAI.Stream.new/1
Having looked through the OpenAI.Stream module, and to preserve backward compatibility, I propose we handle the error case in the OpenAI.Stream.new/1 resource and return the error as a stream item, similar to the %{"status" => :error} pattern already present when non-200 status codes are received.
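A minimal sketch of the proposed clauses, assuming the usual Stream.resource/3 next_fun shape; the plain maps below stand in for the HTTPoison structs so the example is self-contained:

```elixir
defmodule OpenAI.StreamErrorSketch do
  # Hypothetical next_fun clauses for the Stream.resource/3 in
  # OpenAI.Stream.new/1. The maps stand in for HTTPoison structs.

  # Existing behaviour: non-200 statuses are emitted as an error item.
  def next(%{status_code: code}) when code != 200 do
    {[%{"status" => :error, "code" => code}], :halt}
  end

  # Proposed addition: transport errors (%HTTPoison.Error{reason: _, id: nil})
  # become a stream item too, instead of raising FunctionClauseError.
  def next(%{reason: reason, id: nil}) do
    {[%{"status" => :error, "reason" => reason}], :halt}
  end
end
```

Callers can then match on %{"status" => :error} uniformly, whatever the failure mode.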
As the title says, we could use a newer version of the docs.
I have a PR ready; it's a small thing :)
Not sure if I'm doing something wrong, but the example from the docs:
OpenAI.chat_completion([
  model: "gpt-3.5-turbo",
  messages: [
    %{role: "system", content: "You are a helpful assistant."},
    %{role: "user", content: "Who won the world series in 2020?"},
    %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
    %{role: "user", content: "Where was it played?"}
  ],
  stream: true # set this param to true
])
|> Stream.each(fn res ->
  IO.inspect(res)
end)
|> Stream.run()
generates an error:
** (CaseClauseError) no case clause matching: {:ok, %HTTPoison.AsyncResponse{id: #Reference<0.2742941706.3081240585.23996>}}
(openai 0.5.1) lib/openai/client.ex:26: OpenAI.Client.handle_response/1
This just dropped:
https://platform.openai.com/docs/guides/chat
It would be wonderful to get support for it in your library. If you don't have time in the near future, I will try to add it myself. Thanks!
A lot of what's required for defining an agent is also part of the documentation process for functions. Maybe there is a behaviour, or something similar, that could be used to define this, or we could use the existing options like @doc and @spec.
I'd be happy to help with this, and could put something together for a more formal proposal as well.
There are OpenAI API endpoints for audio transcription and translation; it would be great if this lib supported them.
Got an unexpected error when their servers were misconfigured:
** (CaseClauseError) no case clause matching: {:ok, %HTTPoison.Response{status_code: 503, body: {:error, {:unexpected_token, "<html>\r\n<head><title>503 Service Temporarily Unavailable</title></head>\r\n<body>\r\n<center><h1>503 Service Temporarily Unavailable</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>"}},
It would be great if this returned an :error instead.
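A hedged sketch of how handle_response/1 could cover this case; the map fields stand in for %HTTPoison.Response{}, and this is not the library's actual code:

```elixir
defmodule OpenAI.ClientSketch do
  # Hypothetical clauses: any non-2xx response (including an HTML 503 page
  # whose body failed JSON decoding) becomes {:error, ...} instead of
  # falling through to a CaseClauseError.
  def handle_response({:ok, %{status_code: code, body: body}}) when code in 200..299 do
    {:ok, body}
  end

  def handle_response({:ok, %{status_code: code, body: body}}) do
    {:error, %{status: code, body: body}}
  end

  def handle_response({:error, err}), do: {:error, err}
end
```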
It would be helpful to be able to specify the HTTPoison options, to avoid timeouts, etc.
I would like to be able to use Bypass, or similar, to write local integration tests without having to hit the actual OpenAI API. Key to this is the ability to set openai.ex's URL, currently hardcoded as @openai_url within OpenAI.Config.
I would like to propose that the openai_url be overridable by a new api_url config option:
config :openai,
  api_key: "your-api-key",
  organization_key: "your-organization-key",
  api_url: "http://localhost/",
  http_options: [recv_timeout: 2_000]
This could then be overridden in a test setup block like so:
setup %{conn: conn} do
  # Set up mock OpenAI server
  bypass = Bypass.open()
  Application.put_env(:openai, :api_url, "http://localhost:#{bypass.port}/")
  # ...
  {:ok, bypass: bypass, conn: conn}
end
Thoughts?
Tesla is easier to control than Hackney (e.g. HTTP/2, retrying, ...).
What about replacing Hackney with Tesla, and setting the default Tesla adapter to Hackney?
Currently, the request_options argument is set to [] by default, which means that request_options() is never called before passing to post(...).
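One possible fix, sketched with a stand-in for the configured options (module and function names here are hypothetical, not the library's actual API):

```elixir
defmodule OpenAI.RequestOptionsSketch do
  # Stand-in for the library's configured request_options().
  def configured_request_options, do: [recv_timeout: 30_000]

  # Hypothetical fix: merge the configured options with any per-call
  # overrides, so the configured defaults are never silently dropped
  # by a `request_options \\ []` default argument.
  def effective_options(request_options \\ []) do
    Keyword.merge(configured_request_options(), request_options)
  end
end
```

The configured options apply by default, and a caller-supplied keyword still wins on conflicts.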
Hi! First, thanks for your work on this.
I've gotten the streaming to work in an .exs file (as demonstrated in #36), but it doesn't seem to work in a shell (iex -S mix); it just hangs forever.
Is there a fundamental reason that has to do with the shell, or am I just missing something?
Limit the mix-test-watch dependency to dev and test, as suggested in the official documentation, to avoid a deps conflict in case the application is using the default config.
Suggestion (source):
# mix.exs
def deps do
  [
    {:mix_test_watch, "~> 1.0", only: [:dev, :test], runtime: false}
  ]
end
Conflict to avoid:
Dependencies have diverged:
* mix_test_watch (Hex package)
the :only option for dependency mix_test_watch
> In mix.exs:
{:mix_test_watch, "~> 1.1", [env: :prod, hex: "mix_test_watch", only: [:dev, :test], runtime: false, repo: "hexpm"]}
does not match the :only option calculated for
> In deps/openai/mix.exs:
{:mix_test_watch, "~> 1.0", [env: :prod, hex: "mix_test_watch", repo: "hexpm", optional: false]}
Remove the :only restriction from your dep
** (Mix) Can't continue due to errors on dependencies
In some sense, the api_url is not "https://api.openai.com/", and the PR is #40.
In the current implementation, the HTTP client is very unsafe and slow: https://github.com/mgallo/openai.ex/blob/main/lib/openai/client.ex#L15
Calling String.to_atom/1 is considered bad practice and should be avoided in frequently executed code paths like this, since it creates new atoms in VM memory that are never garbage-collected.
The responses are also unstructured, for much the same reason.
Not to mention that JSON as a library is horribly slow compared to all the other engines: https://gist.github.com/devinus/f56cff9e5a0aa9de9215cf33212085f6
My suggestion: replace JSON with Jason or Poison.
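As for the atom-safety point, a hedged sketch of the safer conversion (the helper name is hypothetical):

```elixir
defmodule OpenAI.KeySketch do
  # Hypothetical safer replacement for String.to_atom/1: only convert keys
  # that already exist as atoms in the VM, leaving unknown keys as strings.
  # This caps atom-table growth even on unexpected payloads, since
  # String.to_existing_atom/1 never creates new atoms.
  def safe_key(key) when is_binary(key) do
    String.to_existing_atom(key)
  rescue
    ArgumentError -> key
  end
end
```

Jason's `keys: :atoms!` option achieves the same effect at decode time.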
I'm having this problem on prod. The OpenAI call errors out with:
** (MatchError) no match of right hand side value: {:error, %{"error" => %{"code" => nil, "message" => "You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.", "param" => nil, "type" => "invalid_request_error"}}}
Here is an example I run in iex; it works in dev but not in prod:
OpenAI.chat_completion(
model: "gpt-3.5-turbo",
messages: [%{role: "user", content: "Hello how are you?"}]
)
I verified that the ENV keys are set properly on prod as well, using echo $OPENAI_API_KEY.
My config.exs looks like this:
config :openai,
  # find it at https://platform.openai.com/account/api-keys
  api_key: System.get_env("OPENAI_API_KEY"),
  # find it at https://platform.openai.com/account/org-settings under "Organization ID"
  organization_key: System.get_env("OPENAI_ORGANIZATION_ID")
Any suggestions?
I am adding this library to my Phoenix application.
In my runtime config, I've added:
case System.get_env("OPENAI_API_KEY") do
  nil -> nil
  key -> config :openai, api_key: key
end
I've validated that this populates my API key via OpenAI.Config.api_key(). However, when I call OpenAI.audio_transcription(path, %{model: "whisper-1"}), I get back an error that I haven't populated my API key. From my read of the code, I don't know how this library would ever pick up these defaults; it just builds an empty %Config{} struct.
Unless there's some magic where calling with this empty struct picks up defaults from the GenServer state. That said, I've validated the GenServer is running in my Phoenix application, and %OpenAI.Config{} just returns an empty struct:
%OpenAI.Config{
  api_key: nil,
  organization_key: nil,
  http_options: nil,
  api_url: nil
}
Just to be thorough, I also added the configs to my config.exs as well, in case they needed to be available at compile time. I still have the same issue.
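If the struct really doesn't fall back to the application environment, a nil fallback like the following sketch would restore the documented behaviour (the module and field names are stand-ins, not the library's actual internals):

```elixir
defmodule OpenAI.ConfigSketch do
  defstruct [:api_key]

  # Hypothetical resolution order: prefer the per-request struct field,
  # then fall back to the application environment set in runtime.exs.
  def api_key(%__MODULE__{api_key: nil}), do: Application.get_env(:openai, :api_key)
  def api_key(%__MODULE__{api_key: key}), do: key
end
```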
Would you also be willing to add a setting to make this library compatible with Azure's version of the OpenAI API endpoints?
This would mirror the openai library for Python: https://github.com/openai/openai-python#microsoft-azure-endpoints
Azure uses only a subset of the endpoints OpenAI provides, with a different request URL.
Here is a link to the Swagger doc for the endpoints, for auditing feasibility. I am also willing to help add this feature.
https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json
The applications key in OpenAI.MixProject.application/0 is most likely a relic from older Mix versions. I don't think it serves a purpose anymore, and we get compiler warnings in our project when compiling this library.
Lines 22 to 23 in dd48493
If somebody can confirm that there is no specific reason to keep the applications key around, I'm happy to do a PR!
I'm using OpenAI chat completion with Stream. In runtime.exs I have the config set as documented:
if config_env() in [:prod, :dev] do
  config :openai,
    # find it at https://platform.openai.com/account/api-keys
    api_key: System.get_env("OPENAI_API_KEY"),
    # find it at https://platform.openai.com/account/org-settings under "Organization ID"
    organization_key: System.get_env("OPENAI_ORG_KEY"),
    # optional, passed to [HTTPoison.Request](https://hexdocs.pm/httpoison/HTTPoison.Request.html) options
    http_options: [recv_timeout: :infinity, stream_to: self(), async: :once]
end
And then running the example from the documentation:
OpenAI.chat_completion([
  model: "gpt-3.5-turbo",
  messages: [
    %{role: "system", content: "You are a helpful assistant."},
    %{role: "user", content: "Who won the world series in 2020?"},
    %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
    %{role: "user", content: "Where was it played?"}
  ],
  stream: true
])
|> Stream.each(fn res ->
  IO.inspect(res)
end)
|> Stream.run()
But nothing happens, the process hangs indefinitely, with no inspect output.
When creating the stream with inline config, it works OK:
OpenAI.chat_completion(
  [
    model: "gpt-3.5-turbo",
    messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Who won the world series in 2020?"},
      %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
      %{role: "user", content: "Where was it played?"}
    ],
    stream: true
  ],
  %OpenAI.Config{http_options: [recv_timeout: :infinity, stream_to: self(), async: :once]}
)
But I would prefer to not use inline config, and instead use application config as shown in the documentation.
During periods of high volume, and in particular when using some of the gpt-3.5 series models, OpenAI will occasionally split events into multiple chunks. The current approach of splitting each chunk on "\n" assumes the chunks contain complete events. However, this is not always the case.
** (Jason.DecodeError) unexpected end of input at position 18
(jason 1.4.0) lib/jason.ex:92: Jason.decode!/2
(elixir 1.15.6) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
(elixir 1.15.6) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
(openai 0.6.1) lib/openai/stream.ex:57: anonymous fn/1 in OpenAI.Stream.new/1
(elixir 1.15.6) lib/stream.ex:1626: Stream.do_resource/5
(elixir 1.15.6) lib/stream.ex:690: Stream.run/1
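A minimal sketch of a buffering fix, assuming the stream carries the trailing partial line over between chunks; the module and function names here are hypothetical:

```elixir
defmodule OpenAI.SSEBufferSketch do
  # Hypothetical helper: concatenate the leftover buffer with the new chunk,
  # split on newlines, and return {complete_lines, new_buffer}. Only the
  # complete lines are safe to pass to Jason.decode!/1; the trailing partial
  # line is carried over until the next chunk arrives.
  def split_complete(buffer, chunk) do
    parts = String.split(buffer <> chunk, "\n")
    {complete, [rest]} = Enum.split(parts, length(parts) - 1)
    {complete, rest}
  end
end
```

A chunk that ends exactly on a newline leaves an empty buffer, so nothing is lost in either case.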
I believe the API key is currently read once from the environment during configuration and then re-used globally. It would be nice to be able to set the API key per request.
We have a multi-tenant use case where multiple OpenAI API keys are present and certain requests must use certain keys.
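Since chat_completion accepts an %OpenAI.Config{} struct as a second argument, a per-tenant key could be selected before each call. A sketch, with made-up tenant names and keys:

```elixir
defmodule TenantKeySketch do
  # Hypothetical per-tenant lookup; the tenants and keys below are made up.
  @tenant_keys %{"acme" => "sk-acme", "globex" => "sk-globex"}

  # Pick the API key for a tenant, falling back to a default
  # (a stand-in for the globally configured key).
  def key_for(tenant, default \\ "sk-default") do
    Map.get(@tenant_keys, tenant, default)
  end
end

# The chosen key could then be passed per request, e.g.:
#   OpenAI.chat_completion(params, %OpenAI.Config{api_key: TenantKeySketch.key_for(tenant)})
```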
Currently, when stream: true is set, we're receiving responses with string keys:
%{
"choices" => [
%{"delta" => %{"role" => "assistant"}, "finish_reason" => nil, "index" => 0}
],
"created" => 1682700668,
"id" => "chatcmpl-7ALbIuLju70hXy3jPa3o5VVlrxR6a",
"model" => "gpt-3.5-turbo-0301",
"object" => "chat.completion.chunk"
}
In line with the standard (non-stream) responses, I'd expect this map to use atom keys i.e.
%{
choices: [
%{delta: %{role: "assistant"}, finish_reason: nil, index: 0}
],
created: 1682700668,
id: "chatcmpl-7ALbIuLju70hXy3jPa3o5VVlrxR6a",
model: "gpt-3.5-turbo-0301",
object: "chat.completion.chunk"
}
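For what it's worth, a recursive conversion could look like this sketch (a hypothetical helper, not the library's code; it assumes every key already exists as an atom, e.g. because the non-stream response shapes created them, and raises otherwise):

```elixir
defmodule OpenAI.DeltaKeysSketch do
  # Hypothetical sketch: recursively convert string keys to existing atoms so
  # streamed chunks match the shape of non-stream responses. Uses
  # String.to_existing_atom/1 to avoid creating unbounded atoms; unknown keys
  # raise ArgumentError rather than silently growing the atom table.
  def atomize(map) when is_map(map) do
    Map.new(map, fn {k, v} -> {String.to_existing_atom(k), atomize(v)} end)
  end

  def atomize(list) when is_list(list), do: Enum.map(list, &atomize/1)
  def atomize(other), do: other
end
```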