
openai.ex's People

Contributors

almirsarajcic, bfolkens, bulld0zer, darova93, kentaro, kianmeng, kpanic, mgallo, miserlou, mrmrinal, nallwhy, nathanalderson, nicnilov, pedromvieira, rwdaigle, shawnleong, speerj


openai.ex's Issues

HTTPoison error cases aren't handled in Stream.new/1

When handling a streaming request, OpenAI.Client#91 may return %HTTPoison.Error{reason: _, id: nil}, which then causes the following:

** (FunctionClauseError) no function clause matching in anonymous fn/1 in OpenAI.Stream.new/1

Having looked through the OpenAI.Stream module, I propose that, to preserve backward compatibility, we handle the error case in the OpenAI.Stream.new/1 resource and return the error as a stream item, similar to the %{"status" => :error} pattern already used when non-200 status codes are received.
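To make the proposal concrete, here is a minimal sketch. The function name and the bare-map stand-in for %HTTPoison.Error{} are illustrative only, not the library's actual code:

```elixir
defmodule StreamErrorSketch do
  # Non-200 statuses are already surfaced as stream items like this:
  def to_stream_item(%{status_code: code}) when code != 200 do
    %{"status" => :error, "code" => code}
  end

  # Proposed: surface transport errors (%HTTPoison.Error{reason: reason})
  # the same way instead of raising FunctionClauseError.
  def to_stream_item(%{reason: reason}) do
    %{"status" => :error, "reason" => reason}
  end
end
```

Consumers could then pattern-match on %{"status" => :error} for both transport and HTTP failures.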

Update documentation version

Describe the feature or improvement you're requesting

As the title says, we could use a newer version of the docs.

Additional context

I have a PR ready; it's a small thing :)

Streaming example from docs doesn't work

Describe the feature or improvement you're requesting

Not sure if I'm doing something wrong, but the example from the docs:

OpenAI.chat_completion([
    model: "gpt-3.5-turbo",
    messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Who won the world series in 2020?"},
      %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
      %{role: "user", content: "Where was it played?"}
    ],
    stream: true # set this param to true
  ]
)
|> Stream.each(fn res ->
  IO.inspect(res)
end)
|> Stream.run()

generates an error:

** (CaseClauseError) no case clause matching: {:ok, %HTTPoison.AsyncResponse{id: #Reference<0.2742941706.3081240585.23996>}}
    (openai 0.5.1) lib/openai/client.ex:26: OpenAI.Client.handle_response/1

Additional context

No response

Chat Support

Describe the feature or improvement you're requesting

This just dropped:
https://platform.openai.com/docs/guides/chat

It would be wonderful to get support for it in your library. If you don't have time in the near future, I will try to add it myself. Thx!

Additional context

No response

OpenAI Agents Behaviour

Describe the feature or improvement you're requesting

A lot of what's required for defining an agent is also part of the documentation process for functions. Maybe there is a behaviour or something similar that could be used to define them, or we could build on the existing attributes like @doc and @spec.

I'd be happy to help with this, and could put something together for a more formal proposal as well.

Additional context

No response

Handle nginx Error

Got an unexpected error when their servers were misconfigured:

** (CaseClauseError) no case clause matching: {:ok, %HTTPoison.Response{status_code: 503, body: {:error, {:unexpected_token, "<html>\r\n<head><title>503 Service Temporarily Unavailable</title></head>\r\n<body>\r\n<center><h1>503 Service Temporarily Unavailable</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>"}},

It would be great if this returned an :error instead.
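A hedged sketch of how such a clause might look; the clause shapes here are inferred from the error above, not copied from the library's handle_response/1:

```elixir
defmodule ResponseSketch do
  # Body failed JSON decoding (e.g. the nginx HTML page above):
  # surface it as an error tuple instead of raising CaseClauseError.
  def handle_response({:ok, %{status_code: code, body: {:error, reason}}}) do
    {:error, %{status: code, reason: reason}}
  end

  def handle_response({:ok, %{status_code: 200, body: body}}), do: {:ok, body}

  def handle_response({:ok, %{status_code: code, body: body}}) do
    {:error, %{status: code, body: body}}
  end

  def handle_response({:error, reason}), do: {:error, reason}
end
```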

Make URL target a config option to allow for easier local testing and mocking

Describe the feature or improvement you're requesting

I would like to be able to use Bypass, or similar, to write local integration tests without having to hit the actual OpenAI API. Key to this is the ability to set openai.ex's URL, currently hardcoded as @openai_url within OpenAI.Config.

I would like to propose that the openai_url be overridable by a new api_url config option:

config :openai,
  api_key: "your-api-key",
  organization_key: "your-organization-key",
  api_url: "http://localhost/",
  http_options: [recv_timeout: 2_000] 

This could then be overridden in a test setup block like so:

  setup %{conn: conn} do

    # Setup mock OpenAI server
    bypass = Bypass.open()
    Application.put_env(:openai, :api_url, "http://localhost:#{bypass.port}/")

    # ...

    {:ok, bypass: bypass, conn: conn}
  end

Thoughts?
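For reference, a minimal sketch of what the change could look like on the library side. The default URL and function name here are assumptions for illustration, not the actual source:

```elixir
defmodule ConfigSketch do
  # assumed default; the real value lives in the hardcoded @openai_url
  @default_api_url "https://api.openai.com/v1"

  # fall back to the default when no :api_url is configured,
  # so existing users see no behaviour change
  def api_url do
    Application.get_env(:openai, :api_url, @default_api_url)
  end
end
```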

Additional context

No response

What about replacing Hackney with Tesla?

Describe the feature or improvement you're requesting

Tesla is easier to control than Hackney
(e.g. HTTP/2, retries, ...).

What about replacing Hackney with Tesla, and setting the default Tesla adapter to Hackney?
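If Tesla were adopted, the wiring might look like the following. This is a sketch only: it assumes the tesla and hackney hex packages, and the module names are Tesla's documented middleware, not anything in openai.ex today:

```elixir
defmodule OpenAI.TeslaClientSketch do
  def new(api_key) do
    middleware = [
      {Tesla.Middleware.BaseUrl, "https://api.openai.com/v1"},
      {Tesla.Middleware.Headers, [{"authorization", "Bearer " <> api_key}]},
      Tesla.Middleware.JSON,
      # retries and similar concerns become plain middleware entries
      {Tesla.Middleware.Retry, delay: 500, max_retries: 3}
    ]

    # keep Hackney as the default adapter, as proposed
    Tesla.client(middleware, Tesla.Adapter.Hackney)
  end
end
```

Users could then swap the adapter (e.g. to Mint for HTTP/2) without the library changing its request code.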

Additional context

No response

Streaming example does not work in the shell

hi! first, thanks for your work on this 😊

I've gotten the streaming to work in an .exs file (as demonstrated in #36), but it doesn't seem to work in a shell (iex -S mix); it just hangs forever.

is there a fundamental reason that has to do with the shell, or am I just missing something?

`mix-test-watch` dependency running in all environments

Describe the feature or improvement you're requesting

Limit the mix-test-watch dependency to the dev and test environments, as suggested in the official documentation, to avoid the deps conflict below when the consuming application uses the default config.

Suggestion (source):

# mix.exs
def deps do
  [
    {:mix_test_watch, "~> 1.0", only: [:dev, :test], runtime: false}
  ]
end

Conflict to avoid:

Dependencies have diverged:
* mix_test_watch (Hex package)
  the :only option for dependency mix_test_watch

  > In mix.exs:
    {:mix_test_watch, "~> 1.1", [env: :prod, hex: "mix_test_watch", only: [:dev, :test], runtime: false, repo: "hexpm"]}

  does not match the :only option calculated for

  > In deps/openai/mix.exs:
    {:mix_test_watch, "~> 1.0", [env: :prod, hex: "mix_test_watch", repo: "hexpm", optional: false]}

  Remove the :only restriction from your dep
** (Mix) Can't continue due to errors on dependencies

Additional context

No response

Improve JSON decoding strategy

Describe the feature or improvement you're requesting

In the current implementation, the HTTP client is very unsafe and slow: https://github.com/mgallo/openai.ex/blob/main/lib/openai/client.ex#L15

Calling String.to_atom/1 is considered bad practice and should be avoided in hot code paths like this, since it creates new atoms in VM memory that are never garbage-collected.

The responses are also unstructured for mostly the same reason.
Not to mention that JSON as a library is horribly slow compared to all the other engines: https://gist.github.com/devinus/f56cff9e5a0aa9de9215cf33212085f6

My suggestions:

  • Replace JSON with Jason or Poison
  • Switch to a safe atom-creation strategy (easy with Jason/Poison, which can decode directly to existing atoms without manual string conversion)
  • Define core API models as structs, and directly decode to them
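To illustrate the "safe atom" idea without pulling in a dependency, here is a dependency-free sketch; with Jason this is simply Jason.decode!(body, keys: :atoms!). The module and function names are hypothetical:

```elixir
defmodule SafeKeysSketch do
  # Convert string keys only to atoms that already exist in the VM,
  # leaving unknown keys as strings — no unbounded atom creation.
  def atomize(map) when is_map(map) do
    Map.new(map, fn {k, v} -> {safe_key(k), atomize(v)} end)
  end

  def atomize(list) when is_list(list), do: Enum.map(list, &atomize/1)
  def atomize(other), do: other

  defp safe_key(k) when is_binary(k) do
    String.to_existing_atom(k)
  rescue
    ArgumentError -> k
  end

  defp safe_key(k), do: k
end
```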

Additional context

No response

API key error on prod: You didn't provide an API key. You need to provide your API key in an Authorization header

I'm having this problem on prod. The OpenAI call errors out with:

** (MatchError) no match of right hand side value: {:error, %{"error" => %{"code" => nil, "message" => "You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.", "param" => nil, "type" => "invalid_request_error"}}}

An example I run in iex; it works in dev but not in prod:

OpenAI.chat_completion(
  model: "gpt-3.5-turbo",
  messages: [%{role: "user", content: "Hello how are you?"}]
)

I verified that the ENV keys are set properly on prod as well using echo $OPENAI_API_KEY.

My config.exs looks like this:

config :openai,
  # find it at https://platform.openai.com/account/api-keys
  api_key: System.get_env("OPENAI_API_KEY"),
  # find it at https://platform.openai.com/account/org-settings under "Organization ID"
  organization_key: System.get_env("OPENAI_ORGANIZATION_ID")

Any suggestions?
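One likely cause (an inference from the config shown, not confirmed against this deployment): config/config.exs is evaluated at build time, so in a release System.get_env("OPENAI_API_KEY") captures the environment of the build machine, not the prod host. Moving the lookup into config/runtime.exs, which is evaluated at boot, usually fixes this:

```elixir
# config/runtime.exs — evaluated when the release starts, so it sees
# the environment variables actually set on the prod host
import Config

config :openai,
  api_key: System.get_env("OPENAI_API_KEY"),
  organization_key: System.get_env("OPENAI_ORGANIZATION_ID")
```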

Library doesn't seem to load default config with Phoenix

I am adding this library to my Phoenix Application.

In my runtime, I've added

case System.get_env("OPENAI_API_KEY") do
  nil -> nil
  key -> config :openai, api_key: key
end

I've validated this populates my api key with OpenAI.Config.api_key(). However, when I call OpenAI.audio_transcription(path, %{model: "whisper-1"}) I get back an error that I haven't populated my API key. From my read of the code, I don't know how this library would ever pick up these defaults. It just calls up an empty %Config{} struct.

Unless there's some magic where calling this empty struct picks up defaults from the GenServer state. That said, I've verified the GenServer is running in my Phoenix application, and calling %OpenAI.Config{} just returns an empty struct:

%OpenAI.Config{
  api_key: nil,
  organization_key: nil,
  http_options: nil,
  api_url: nil
}

Just to be thorough, I also added the configs to my config.exs as well, in case they needed to be available at compile time. I still have the same issue.

Add compatibility with Azure's OpenAI API Endpoints

Describe the feature or improvement you're requesting

Would you also be willing to have a setting to make this library compatible with Azure's version of OpenAI API endpoints?

This would mirror openai library for Python https://github.com/openai/openai-python#microsoft-azure-endpoints
Azure only uses a subset of the endpoints OpenAI provides with a different request URL.

Here is a link to the Swagger doc for the endpoints for auditing if feasible. I am also willing to help add this feature.
https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json

Additional context

No response

Remove applications key from MixProject.application/0

The applications key in OpenAI.MixProject.application/0 is most likely a relic from older Mix versions. I don't think it serves a purpose anymore, and we get compiler warnings in our project when compiling this library.

openai.ex/mix.exs

Lines 22 to 23 in dd48493

applications: [:httpoison, :jason, :logger],
extra_applications: [:logger]

If somebody can confirm that there is no specific reason to keep the applications key around, I'm happy to do a PR!
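If the key is indeed redundant, the cleanup would be a sketch like the following; since Mix 1.4, runtime applications are inferred from deps, so only extra_applications should be needed:

```elixir
# mix.exs — proposed: drop the legacy `applications` list;
# :httpoison and :jason are started automatically as deps
def application do
  [
    extra_applications: [:logger]
  ]
end
```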

Bug: http_options configuration is not used

I'm using OpenAI chat completion with Stream

In runtime.exs I have the config set as documented:

if config_env() in [:prod, :dev] do
  config :openai,
    # find it at https://platform.openai.com/account/api-keys
    api_key: System.get_env("OPENAI_API_KEY"),
    # find it at https://platform.openai.com/account/org-settings under "Organization ID"
    organization_key: System.get_env("OPENAI_ORG_KEY"),
    # optional, passed to [HTTPoison.Request](https://hexdocs.pm/httpoison/HTTPoison.Request.html) options
    http_options: [recv_timeout: :infinity, stream_to: self(), async: :once]
end

And then running the example from the documentation:

OpenAI.chat_completion([
    model: "gpt-3.5-turbo",
    messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Who won the world series in 2020?"},
      %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
      %{role: "user", content: "Where was it played?"}
    ],
    stream: true
  ]
)
|> Stream.each(fn res ->
  IO.inspect(res)
end)
|> Stream.run()

But nothing happens, the process hangs indefinitely, with no inspect output.

When creating the stream with inline config, it works OK:

OpenAI.chat_completion([
    model: "gpt-3.5-turbo",
    messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Who won the world series in 2020?"},
      %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
      %{role: "user", content: "Where was it played?"}
    ],
    stream: true
  ],
  %OpenAI.Config{http_options: [recv_timeout: :infinity, stream_to: self(), async: :once]}
)

But I would prefer to not use inline config, and instead use application config as shown in the documentation.

Intermittent Jason.DecodeError while streaming output

During periods of high volume, and in particular when using some of the gpt-3.5 series models, OpenAI will occasionally split events across multiple chunks. The current approach of splitting each chunk on "\n" assumes every chunk contains only complete events, which is not always the case.

** (Jason.DecodeError) unexpected end of input at position 18
    (jason 1.4.0) lib/jason.ex:92: Jason.decode!/2
    (elixir 1.15.6) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
    (elixir 1.15.6) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
    (openai 0.6.1) lib/openai/stream.ex:57: anonymous fn/1 in OpenAI.Stream.new/1
    (elixir 1.15.6) lib/stream.ex:1626: Stream.do_resource/5
    (elixir 1.15.6) lib/stream.ex:690: Stream.run/1
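A possible fix is to carry the trailing partial line forward in a buffer and only JSON-decode lines that were terminated by "\n". A self-contained sketch (not the library's actual code; names are hypothetical):

```elixir
defmodule SseBufferSketch do
  # Returns {complete_lines, new_buffer}: complete lines are safe to
  # decode now; the buffer is prepended to the next chunk.
  def split(chunk, buffer \\ "") do
    data = buffer <> chunk

    case String.split(data, "\n") do
      # no newline in this chunk — everything is still incomplete
      [incomplete] ->
        {[], incomplete}

      parts ->
        {complete, [rest]} = Enum.split(parts, -1)
        {Enum.reject(complete, &(&1 == "")), rest}
    end
  end
end
```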

API key per request

Describe the feature or improvement you're requesting

I believe the API key is currently read once from the environment during configuration and then re-used globally. It would be nice to be able to set the API key per request.

Additional context

We have a multi-tenant use case where multiple OpenAI API keys are present and certain requests must use certain keys.
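A possible direction, building on the %OpenAI.Config{} second argument that the library already accepts for http_options: let the struct's api_key field override the global key per call. Hedged sketch; tenant_api_key is a hypothetical per-tenant value, and whether api_key is honored here is the feature being requested:

```elixir
# Hypothetical per-tenant key lookup lives elsewhere in the app
config = %OpenAI.Config{api_key: tenant_api_key}

OpenAI.chat_completion(
  [
    model: "gpt-3.5-turbo",
    messages: [%{role: "user", content: "Hello"}]
  ],
  config
)
```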

Feature: Atomize string keys in stream responses

Describe the feature or improvement you're requesting

Currently when stream: true is set, we're receiving responses with string keys:

%{
  "choices" => [
    %{"delta" => %{"role" => "assistant"}, "finish_reason" => nil, "index" => 0}
  ],
  "created" => 1682700668,
  "id" => "chatcmpl-7ALbIuLju70hXy3jPa3o5VVlrxR6a",
  "model" => "gpt-3.5-turbo-0301",
  "object" => "chat.completion.chunk"
}

In line with the standard (non-stream) responses, I'd expect this map to use atom keys i.e.

%{
  choices: [
    %{delta: %{role: "assistant"}, finish_reason: nil, index: 0}
  ],
  created: 1682700668,
  id: "chatcmpl-7ALbIuLju70hXy3jPa3o5VVlrxR6a",
  model: "gpt-3.5-turbo-0301",
  object: "chat.completion.chunk"
}

Additional context

No response
