
langchain's People

Contributors

alappe, amokan, benjreinhart, benswift, bowyern, brainlid, bwan-nan, cardosaum, chrisgreg, eltonfonseca, jadengis, matthusby, medoror, mryawe, petrus-jvrensburg, pkrawat1, raulchedrese, ream88, stevehodgkiss, wojtekmach, yujonglee

langchain's Issues

Feature Request: while_needs_response only for functions

What I want to achieve: function calling should be done as a single operation (not delta by delta), as the current implementation does. But content messages should be streamable and arrive as deltas.

So I can have the current convenience combined with the streaming elegance for the user at the same time! Is this possible somehow without lots of code or a manual loop?
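
One possible direction, sketched under the assumption that MessageDelta exposes :content and :function_name fields (as the structs shown elsewhere in this tracker suggest): filter the streaming callback so only plain content deltas reach the user, while the chain keeps accumulating function-call deltas into a single message.

handler = %{
  on_llm_new_delta: fn _model, %LangChain.MessageDelta{} = delta ->
    # Forward plain content only; skip function-call fragments so the
    # function call is handled once the merged message completes.
    if is_nil(delta.function_name) and delta.content do
      IO.write(delta.content)
    end
  end
}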

Recommendation for open source models for function calling.

Hi, I know that the library is optimized more towards OpenAI models, but I was wondering if anyone has tried using Ollama with Llama 3 or nous-hermes2pro-llama3-8b for function calling.
I tried some prompt variations but without much luck.

Support Azure OpenAI

I've managed to use Azure OpenAI with the following minor change:

diff --git a/lib/chat_models/chat_open_ai.ex b/lib/chat_models/chat_open_ai.ex
index 5dd610e..06b4d0c 100644
--- a/lib/chat_models/chat_open_ai.ex
+++ b/lib/chat_models/chat_open_ai.ex
@@ -57,13 +57,13 @@ defmodule LangChain.ChatModels.ChatOpenAI do
           {:ok, Message.t() | MessageDelta.t() | [Message.t() | MessageDelta.t()]}
           | {:error, String.t()}

-  @create_fields [:model, :temperature, :frequency_penalty, :n, :stream, :receive_timeout]
+  @create_fields [:endpoint, :model, :temperature, :frequency_penalty, :n, :stream, :receive_timeout]
   @required_fields [:model]

   @spec get_org_id() :: String.t() | nil
@@ -220,7 +220,7 @@ defmodule LangChain.ChatModels.ChatOpenAI do
       Req.new(
         url: openai.endpoint,
         json: for_api(openai, messages, functions),
-        auth: {:bearer, get_api_key()},
+        headers: %{"api-key" => get_api_key()},
         receive_timeout: openai.receive_timeout
       )

Hopefully this change can serve as a general base for adapting the library to support more chat models.
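
For reference, with the :endpoint field exposed as in the diff above, configuration could look like the following sketch (resource and deployment names are placeholders; the api-version value is an assumption):

chat =
  LangChain.ChatModels.ChatOpenAI.new!(%{
    model: "gpt-4",
    endpoint:
      "https://YOUR-RESOURCE.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT" <>
        "/chat/completions?api-version=2023-05-15"
  })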

GroqCloud support

Hi, thanks for creating this nice library 😊

I recently came across GroqCloud. It seems like an alternative to OpenAI and is powered by open source models like Mixtral, Llama 3, and Gemma. They have two big advantages over OpenAI: speed and price.

So I'm just curious: would it be possible, and would it make sense, to add support for GroqCloud?
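
Since GroqCloud exposes an OpenAI-compatible chat completions endpoint, it might be reachable with no new module at all once the :endpoint field is configurable (see the Azure OpenAI issue above). A hedged, untested sketch:

chat =
  LangChain.ChatModels.ChatOpenAI.new!(%{
    model: "llama3-8b-8192",
    endpoint: "https://api.groq.com/openai/v1/chat/completions"
  })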

add Replicate API option

Even though it's just OpenAI for now, the code is nice and modular and obviously extensible to other hosted LLM providers (🙌🏻).

I'm not sure if there's a roadmap somewhere that I've missed, but Replicate might be a good option for the next "platform" to be added. It's one place that Meta is putting up its various Llama models. However, I think it'd only support the LangChain.Message stuff - there's no function-call support in those models as yet.

I'd be open to putting together a PR to add replicate support (their official Elixir client lib uses httpoison, so I guess it'd be better to just call the Replicate API directly using Req).

Would you be interested in accepting it? Happy to discuss implementation strategies, because I know the move from single -> multiple platform options introduces some decisions & tradeoffs.

Bedrock Support

I am working on this project and I want to use Bedrock as my chat service of choice. I tested this library with OpenAI GPT-4 and it works perfectly.

Here is the code I am testing to connect to the AWS Bedrock API.

Mix.install([
  {:langchain, "~> 0.3.0-rc.0"},
  {:kino, "~> 0.12.0"}
])

# Set the OpenAI API key
Application.put_env(
  :langchain,
  :openai_key,
  {MY_BEDROCK_API_KEY_GOES_HERE}
)

alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message
alias LangChain.MessageDelta

defmodule Chatbot do
  def start do
    handler = %{
      on_llm_new_delta: fn _model, %MessageDelta{} = data ->
        IO.write(data.content)
      end,
      on_message_processed: fn _chain, %Message{} = data ->
        IO.puts("")
        IO.puts("")
        IO.inspect(data.content, label: "COMPLETED MESSAGE")
      end
    }

    chain =
      %{
        llm:
          ChatOpenAI.new!(%{
            model: "amazon.titan-text-express-v1",
            stream: true,
            callbacks: [handler],
            endpoint:
              "https://bedrock-runtime.us-east-1.amazonaws.com/model/amazon.titan-text-express-v1/invoke"
          }),
        callbacks: [handler]
      }
      |> LLMChain.new!()
      |> LLMChain.add_message(
        Message.new_system!(
          "You are a helpful assistant. Provide concise and accurate responses to user queries."
        )
      )

    chat_loop(chain)
  end

  defp chat_loop(chain) do
    user_input = IO.gets("You: ") |> String.trim()

    if user_input == "exit" do
      IO.puts("Chatbot: Goodbye!")
    else
      {:ok, updated_chain, _response} =
        chain
        |> LLMChain.add_message(Message.new_user!(user_input))
        |> LLMChain.run()

      chat_loop(updated_chain)
    end
  end
end

# Start the chatbot
Chatbot.start()

Can someone help me out? What am I doing wrong, or is Bedrock not supported yet?

Add support for Bumblebee functions?

Bumblebee doesn't support a constrained output of only valid JSON.

In the early days of LangChain, they implemented an alternate approach for functions that was more of a hack, but worked well enough.

Investigate if this approach could work for bringing functions to Bumblebee models. It would still help if the model being run understood JSON, functions, etc.

Upgrade Req library

A new version of the Req library was released before this library was published.

Upgrade to the latest Req, v0.4.x.

The API for streaming responses changed.
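
For context, a minimal sketch of the Req v0.4 streaming API (the :into option), with payload and api_key assumed to be built elsewhere:

Req.post!("https://api.openai.com/v1/chat/completions",
  json: payload,
  auth: {:bearer, api_key},
  # Req v0.4: :into receives each body chunk as it streams in
  into: fn {:data, chunk}, {req, resp} ->
    IO.write(chunk)
    {:cont, {req, resp}}
  end
)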

`do_process_response` does not contain handling for all inputs.

In chat_open_ai.ex, the function do_process_response can take either decoded JSON or an {:error, %Jason.DecodeError{}} tuple.

In the definition of the function, we have matching clauses for properly decoded JSON, but we lack a clause for when decoding fails; a sketch of the missing clause follows the log below.

An example where it fails to match the input patterns is reproduced below. It comes from the langchain_demo project (really nice one!):

[debug] HANDLE EVENT "validate" in LangChainDemoWeb.AgentChatLive.Index
  Parameters: %{"_target" => ["chat_message", "content"], "chat_message" => %{"content" => "Hi!"}}
[debug] Replied in 497µs
[debug] HANDLE EVENT "save" in LangChainDemoWeb.AgentChatLive.Index
  Parameters: %{"chat_message" => %{"content" => "Hi!"}}
[debug] Replied in 370µs
[error] Task #PID<0.599.0> started from #PID<0.575.0> terminating
** (FunctionClauseError) no function clause matching in LangChain.ChatModels.ChatOpenAI.do_process_response/1
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:413: LangChain.ChatModels.ChatOpenAI.do_process_response({:error, %Jason.DecodeError{position: 0, token: nil, data: <<31, 139, 8, 0, 0, 0, 0, 0, 0, 3, 76, 143, 177, 110, 3, 33, 16, 68, 251, 251, 138, 17, 181, 185, 139, 173, 40, 150, 248, 134, 148, 233, 207, 8, 54, 6, 9, 88, 12, 123, 78, 44, 203, 255, 30, ...>>}})
    (elixir 1.15.2) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:346: LangChain.ChatModels.ChatOpenAI.decode_streamed_data/1
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:262: anonymous fn/4 in LangChain.ChatModels.ChatOpenAI.do_api_request/4
    (finch 0.16.0) lib/finch/http1/conn.ex:243: Finch.Conn.receive_response/8
    (finch 0.16.0) lib/finch/http1/conn.ex:120: Finch.Conn.request/6
    (finch 0.16.0) lib/finch/http1/pool.ex:45: anonymous fn/8 in Finch.HTTP1.Pool.request/5
    (nimble_pool 1.0.0) lib/nimble_pool.ex:349: NimblePool.checkout!/4
    (finch 0.16.0) lib/finch/http1/pool.ex:38: Finch.HTTP1.Pool.request/5
    (finch 0.16.0) lib/finch.ex:306: anonymous fn/6 in Finch.stream/5
    (telemetry 1.2.1) /Users/mcs/git/github/brainlid/langchain_demo/deps/telemetry/src/telemetry.erl:321: :telemetry.span/3
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:288: anonymous fn/6 in LangChain.ChatModels.ChatOpenAI.do_api_request/4
    (req 0.4.4) lib/req/request.ex:991: Req.Request.run_request/1
    (req 0.4.4) lib/req/request.ex:936: Req.Request.run/1
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:318: LangChain.ChatModels.ChatOpenAI.do_api_request/4
    (langchain 0.1.1) lib/chat_models/chat_open_ai.ex:184: LangChain.ChatModels.ChatOpenAI.call/4
    (langchain 0.1.1) lib/chains/llm_chain.ex:204: LangChain.Chains.LLMChain.do_run/1
    (langchain 0.1.1) lib/chains/llm_chain.ex:186: LangChain.Chains.LLMChain.run_while_needs_response/1
    (langchain_demo 0.1.0) lib/langchain_demo_web/live/agent_chat_live/index.ex:315: anonymous fn/2 in LangChainDemoWeb.AgentChatLive.Index.run_chain/1
    (phoenix_live_view 0.20.0) lib/phoenix_live_view/async.ex:77: Phoenix.LiveView.Async.do_async/5
Function: #Function<8.28433447/0 in Phoenix.LiveView.Async.run_async_task/4>
    Args: []
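
A sketch of the missing clause (the name follows the private function referenced in the trace; the exact return shape is an assumption):

# Handle a failed JSON decode instead of crashing with a FunctionClauseError.
defp do_process_response({:error, %Jason.DecodeError{} = error}) do
  {:error, "Received invalid JSON: #{inspect(error)}"}
end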

FunctionParam array not working with OpenAI

FunctionParam array type seems to not work with OpenAI but does with other models (firefunction v2 in my case).

FunctionParam.new!(%{
  name: "synonyms",
  type: :array,
  description: "A list of synonyms for the term (indicated by 'also called' or 'also known as') if present",
  required: false
})

[error] Received error from API: "Invalid schema for function 'information_extraction': In context=('properties', 'info', 'items', 'properties', 'synonyms'), array schema missing items"

But this did work:

FunctionParam.new!(%{
  name: "synonyms",
  type: :object,
  object_properties: [
    FunctionParam.new!(%{name: "synonym", type: :string, required: false})
  ],
  description: "A list of synonyms for the term (indicated by 'also called' or 'also known as') if present",
  required: true
})

It also seems not to respect the required fields.
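
OpenAI's schema validation requires array types to declare an "items" schema. If FunctionParam supports an :item_type field for array elements (an assumption about this library's API), the original parameter could be written as:

FunctionParam.new!(%{
  name: "synonyms",
  type: :array,
  # :item_type is assumed here; it would emit the "items" key OpenAI wants
  item_type: "string",
  description: "A list of synonyms for the term, if present",
  required: false
})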

Optional dependencies/Behaviour-based 'tools'?

Given that the crawling effort was merged in (which adds deps for :floki and :crawly) and the existing :abacus dependency, is there a planned effort to make these sorts of deps optional and maybe implement some sort of behaviour to facilitate making things more flexible for consumers of the project?

I'd be willing to help on this if there is interest.

Nothing at all wrong with any of the deps, but not every use-case will need them.

How to figure out rate limits?

For OpenAI, the rate limits are specified here. They add fields to the response headers showing how many tokens are still left. To build something that respects those limits and retries after the limit has been reset, it would be great to have those available in the response somehow.
I quickly searched the code but could not find anything. Is there currently a way to handle this?
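
Until the library exposes them, the headers can be read from a raw response. A sketch using Req directly (payload and api_key assumed; the header names are OpenAI's documented ones):

resp =
  Req.post!("https://api.openai.com/v1/chat/completions",
    json: payload,
    auth: {:bearer, api_key}
  )

remaining_tokens = Req.Response.get_header(resp, "x-ratelimit-remaining-tokens")
reset_after = Req.Response.get_header(resp, "x-ratelimit-reset-tokens")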

Implement the Router chain

The router chain (documented and implemented in the Python version) uses classification and branching to choose the path for subsequent prompts.

https://python.langchain.com/docs/modules/chains/foundational/router

Something I want the router to do is support bringing in different functions based on the context. I may have a LOT of potential functions that the LLM could have access to, but I don't want to clutter it and use up tokens when they aren't relevant to the current goal/context. So routing offers a good way for conditionally bringing in other behaviors/functions.

Support Llama 2

Add support for the Llama 2 LLM.

Specifically interested in supporting Nx/Bumblebee usage. A main complaint against OpenAI/ChatGPT and Google/Bard is that private data is sent to an external entity and probably used as training data.

A fully locally hosted and business-use compatible solution is preferred.

Running issue with ChatBumblebee & Llama2

I tried to use ChatBumblebee but it didn't work as expected

This is the livebook instruction I used:

https://gist.github.com/slashmili/ba0ac06a6346e793e357caf940a8a424

When I run the chain, I get lots of warnings:

13:55:43.944 [warning] Streaming call requested but no callback function was given.

And the answer was not as I expected:

COMBINED DELTA MESSAGE RESPONSE: %LangChain.Message{
  content: "I'm just an AI, I don't have access to your personal belongings or the layout of your home, so I cannot accurately locate your hairbrush. However, I can provide you with some general information about where hairbrushes are typically kept in a typical home.\n\nIn many households, hairbrushes are usually stored in a bathroom cabinet or on a bathroom countertop. Some people may also keep their hairbrushes in a dresser drawer or in a designated hair accessory case.\n\nIf you're having trouble finding your hairbrush, you might want to check these locations first. If you're still unable to find it, you could try asking other members of your household if they've seen it or check underneath your bed or in your closet.",
  index: nil,
  status: :complete,
  role: :assistant,
  function_name: nil,
  arguments: nil
}

It seems like it couldn't use Llama to find the right function.

Any idea what I did wrong?

Call code outside of langchain in routes

I have two related questions for routing:

  1. PromptRoute always requires a chain, but this feels quite limiting. It can be convenient to use the router's outcome as the end result of a pipeline—that is, sometimes I just need to know the selected route so I can delegate to the right part of my code. As is, I have to pass a chain in. It would be very handy to be able to pass in, say, a callback function, that simply gets the name of the chosen route.
  2. Related to that: In the evaluate function of the RoutingChain, the debugger helpfully logs the chosen route, but I can't access that in code, so if I want to do the above (run my own code depending on which route is chosen), I have to create dummy chains for every route, then pattern match on whichever chain is returned by the evaluate function. It's a lot of overhead when the name is right there, just out of reach. :)

Would it be possible to do any of the following:

  1. Set chain to optional on PromptRoute?
  2. Set a callback function on PromptRoute?
  3. In RoutingChain, create a new function that evaluates but just returns the name instead of a chain?

Basically what I'm looking for is any path to delegate out to my code from the existing routing structure without a lot of overhead that's ultimately thrown away. I do, of course, see the value of the chain in this pipeline... it's just that I also know that there are a lot of times when I don't need that overhead.

Also, it's very possible I'm missing something that would allow me to accomplish exactly what I'm asking about, and I've just missed entirely! Let me know which and I'm happy to help out.

Request for Community ChatModel: ChatOllama

I am interested in using Ollama as a self-hosted LLM in the Elixir LangChain project. I was wondering if it's possible to extend support to include ChatModel.ChatOllama as well.

Detail:
Python langchain_community.chat_models.ChatOllama

Request:
I kindly request the community's assistance or guidance on how to integrate ChatOllama into the LangChain project. I am new to Elixir (no idea what I am doing most of the time), but I am eager to learn and contribute to this exciting project.

I get `{:error, "Unexpected response. {:ok, %LangChain.Chains.LLMChain{ ... }}}` when using the DataExtractionChain

Given the following code in Livebook:

itinerary_1_day = """
Day 1:

Arrive in Delft and check-in at the Hotel De Emauspoort, a cozy boutique hotel located in the heart of the city.
"""

itinerary_day_schema_parameters = %{
  type: "object",
  properties: %{
    destination_name: %{type: "string"},
    destination_type: %{type: "string"}
  },
  required: []
}

# Model setup
{:ok, chat} = LangChain.ChatModels.ChatOpenAI.new(%{model: "gpt-3.5-turbo", temperature: 0, stream: false, verbose: true})

# run the chain on the text information
data_prompt = itinerary_1_day

{:ok, result} = LangChain.Chains.DataExtractionChain.run(chat, itinerary_day_schema_parameters, data_prompt)

I keep getting the following error when running the cell:

** (MatchError) no match of right hand side value: {:error, "Unexpected response. {:ok, %LangChain.Chains.LLMChain{llm: %LangChain.ChatModels.ChatOpenAI{endpoint: \"https://api.openai.com/v1/chat/completions\", model: \"gpt-3.5-turbo\", temperature: 0.0, frequency_penalty: 0.0, receive_timeout: 60000, n: 1, stream: false}, verbose: false, functions: [%LangChain.Function{name: \"information_extraction\", description: \"Extracts the relevant information from the passage.\", function: nil, parameters_schema: %{type: \"object\", required: [\"info\"], properties: %{info: %{type: \"array\", items: %{type: \"object\", required: [], properties: %{destination_name: %{type: \"string\"}, destination_type: %{type: \"string\"}}}}}}}], function_map: %{\"information_extraction\" => %LangChain.Function{name: \"information_extraction\", description: \"Extracts the relevant information from the passage.\", function: nil, parameters_schema: %{type: \"object\", required: [\"info\"], properties: %{info: %{type: \"array\", items: %{type: \"object\", required: [], properties: %{destination_name: %{type: \"string\"}, destination_type: %{type: \"string\"}}}}}}}}, messages: [%LangChain.Message{content: \"You are a helpful assistant that extracts structured data from text passages. Only use the functions you have been provided with.\", index: nil, status: :complete, role: :system, function_name: nil, arguments: nil}, %LangChain.Message{content: \"Extract and save the relevant entities mentioned in the following passage together with their properties.\\n\\n  Passage:\\n  Day 1:\\n\\nArrive in Delft and check-in at the Hotel De Emauspoort, a cozy boutique hotel located in the heart of the city.\\n\", index: nil, status: :complete, role: :user, function_name: nil, arguments: nil}, %LangChain.Message{content: nil, index: 0, status: :complete, role: :assistant, function_name: \"information_extraction\", arguments: %{\"info\" => [%{\"destination_name\" => \"Delft\", \"destination_type\" => \"City\"}, %{\"destination_name\" => \"Hotel De Emauspoort\", \"destination_type\" => \"Hotel\"}]}}], custom_context: nil, delta: nil, last_message: %LangChain.Message{content: nil, index: 0, status: :complete, role: :assistant, function_name: \"information_extraction\", arguments: %{\"info\" => [%{\"destination_name\" => \"Delft\", \"destination_type\" => \"City\"}, %{\"destination_name\" => \"Hotel De Emauspoort\", \"destination_type\" => \"Hotel\"}]}}, needs_response: true, callback_fn: nil}, %LangChain.Message{content: nil, index: 0, status: :complete, role: :assistant, function_name: \"information_extraction\", arguments: %{\"info\" => [%{\"destination_name\" => \"Delft\", \"destination_type\" => \"City\"}, %{\"destination_name\" => \"Hotel De Emauspoort\", \"destination_type\" => \"Hotel\"}]}}}"}
    (stdlib 5.1.1) erl_eval.erl:498: :erl_eval.expr/6
    /home/sylvester/Code/langchain-livebook-examples/embedding-test.livemd#cell:4ashbyu5o6nm3h56cbxgozcrutf33va7:7: (file)

It looks like a valid result, but it's giving me an error anyway. I'm not sure whether I'm doing something wrong or if this is a bug in LangChain.

Update for OpenAI API changes to functions and tools

  • The list of functions is deprecated in favor of "tools" - docs
  • New tool_choice option to add support for. docs
  • function_call is deprecated in favor of tool_choice docs
  • Support receiving multiple function calls from assistant (reported as new ability with gpt-4-preview)
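
For reference, the updated request shape from OpenAI's docs, expressed here as an Elixir map (messages assumed built elsewhere; values illustrative):

%{
  model: "gpt-4-1106-preview",
  messages: messages,
  # "tools" replaces the deprecated "functions" list
  tools: [
    %{
      type: "function",
      function: %{
        name: "get_weather",
        description: "Get the current weather for a location",
        parameters: %{
          type: "object",
          properties: %{location: %{type: "string"}},
          required: ["location"]
        }
      }
    }
  ],
  # "tool_choice" replaces the deprecated "function_call"
  tool_choice: "auto"
}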

Gemini Pro Issues

It looks like the current implementation of ChatGoogleAI doesn't work with the latest version of the Google Gemini API defined here.

This was enough to get it working for my immediate needs but there may be other issues.

So far the main differences I've come across are:

  • The Gemini API doesn't require a version in the URI path.
  • Response messages no longer contain an index field.
  • The API will not accept messages with an empty text field.

Not sure you'd want a new module for the newest Gemini API or to modify the existing ChatGoogleAI. Either way I'd be happy to put together a PR if it would help.

Add telemetry support?

@brainlid Do you think it would be useful to add telemetry at this point?
I imagine emitting telemetry events for the duration of the response cycle, token usage, and errors.

If you think it is a good idea, I could work on a PR.

Thanks for the work!

Originally posted by @tubedude in #103 (reply in thread)
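
A sketch of what consuming such events could look like, with a hypothetical event name (nothing below exists in the library yet):

:telemetry.attach(
  "log-llm-calls",
  # hypothetical event the library could emit around each LLM call
  [:langchain, :llm, :call, :stop],
  fn _event, %{duration: duration}, _metadata, _config ->
    ms = System.convert_time_unit(duration, :native, :millisecond)
    IO.puts("LLM call took #{ms}ms")
  end,
  nil
)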

%Mint.TransportError{reason: :closed}

I'm intermittently getting %Mint.TransportError{reason: :closed} as a result from do_api_request/4 - has this been seen before or are there any known ideas why?

I can fire off the exact same request afterwards and it works most of the time, but 1% of calls are failing...

EDIT: My team found that this is an ongoing error with Finch. Look at the latest comments here.

Are we able to swap the HTTP client out for another?
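
As a stopgap, if the request goes through Req, its built-in retry support should cover transient transport errors like this one. A sketch (url and payload assumed):

# retry: :transient also retries non-idempotent requests such as POST
Req.post!(url, json: payload, retry: :transient, max_retries: 3)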

Support for Embeddings

Hey there!

I'd like to see LangChain support embeddings! Probably OpenAI's embeddings first, then HuggingFace and others.

Is this something you have thought about and have a plan for? I'd love to get some guidance and give it a try.

Disclaimer: I'm new to this field; if I say something that doesn't make sense, please point it out.
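
In the meantime, the OpenAI embeddings endpoint is a single call that can be made directly. A sketch with Req (api_key assumed available):

resp =
  Req.post!("https://api.openai.com/v1/embeddings",
    json: %{model: "text-embedding-ada-002", input: "Hello, world"},
    auth: {:bearer, api_key}
  )

# The response body carries a "data" list of embedding objects
[%{"embedding" => vector} | _] = resp.body["data"]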

OpenAI Function Call Support

Hi, I am coming from Python and really like what I have seen so far in the Elixir land. I am thinking about porting one of my LLM applications to Elixir, and one of the problems is that I rely heavily on OpenAI's function call feature.
OpenAI's function call (https://openai.com/blog/function-calling-and-other-api-updates) is a good way to output structured data with an LLM and to use external tools. However, to use it we need to write a function parameter specification such as:

{
        'name': 'extract_student_info',
        'description': 'Get the student information from the body of the input text',
        'parameters': {
            'type': 'object',
            'properties': {
                'name': {
                    'type': 'string',
                    'description': 'Name of the person'
                },
                'major': {
                    'type': 'string',
                    'description': 'Major subject.'
                }
            }
        }
    }

The python library Instructor (https://github.com/jxnl/instructor) provides a good way to write such specification using another python library Pydantic (https://docs.pydantic.dev/latest/), which is essentially a schema validation library.

I'm wondering if Elixir provides a way to write such a specification using something more convenient than a plain map. If yes, would this library be a good place to implement such a feature?
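
This library's LangChain.Function already accepts a JSON-schema map via parameters_schema (visible in the DataExtractionChain error elsewhere on this page), so the Python spec above translates fairly directly. A sketch:

LangChain.Function.new!(%{
  name: "extract_student_info",
  description: "Get the student information from the body of the input text",
  parameters_schema: %{
    type: "object",
    properties: %{
      name: %{type: "string", description: "Name of the person"},
      major: %{type: "string", description: "Major subject."}
    }
  }
})

Depending on the version, a :function callback may also be required; the DataExtractionChain appears to get away without one.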

Vector store (pgvector/pinecone) support?

Thank you @brainlid for starting this project! Played with the two live notebooks, works very well! Simple and very clean, well designed interfaces. 👍

I am wondering what's the roadmap moving forward, especially around vector store support.

Would very much love to migrate my NodeJS langchain projects to Phoenix/Elixir.

Thanks again for the effort! Can't wait to write more using it ❤️

Exposing usage data from API response

Hi @brainlid, first of all thanks for this project, it has been very useful to us so far.

We are running into the issue that we want to be able to track our token usage on OpenAI. This is given as part of the response, but I believe LangChain doesn't do anything with this information yet.

I am wondering if you would consider a PR to expose this data somehow.

And if you are, whether you have a preferred way to do this. We would probably be happy with simply making the raw response available somehow, as a trap door. But if you want to structure this data and translate it per API, we could also talk about that.

Thanks, Derek
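
For reference, the field in question from OpenAI's documented (non-streamed) response body, which a trap door could pass through as-is (values illustrative):

response_body["usage"]
#=> %{"prompt_tokens" => 13, "completion_tokens" => 7, "total_tokens" => 20}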

Handle streamed JSON response data when broken up across multiple data rows

This is part of Issue #28 but not specific to Azure.

Data is sometimes reported being returned like the following:

DATA:- : "data: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\",\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" if\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" there\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" are\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" specific\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" details\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" about\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" 
the\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" new\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" developments\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" or\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" the\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" potential\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" value\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" 
they\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" could\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" bring\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" to\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" The\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" third\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"s"
DATA:- : "afe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" company\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\"2\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\",\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" be\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\n"
DATA:- : "data: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" sure\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" to\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" include\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" those\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" as\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\" well\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\ndata: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":null,\"index\":0,\"delta\":{\"content\":\".\"},\"content_filter_results\":{\"hate\":{\"filtered\":false,\"severity\":\"safe\"},\"self_harm\":{\"filtered\":false,\"severity\":\"safe\"},\"sexual\":{\"filtered\":false,\"severity\":\"safe\"},\"violence\":{\"filtered\":false,\"severity\":\"safe\"}}}]}\n\n"
DATA:- : "data: {\"id\":\"chatcmpl-8arLKTxmyC79hRBZdCAHBJS0hREPa\",\"object\":\"chat.completion.chunk\",\"created\":1703795550,\"model\":\"gpt-4\",\"choices\":[{\"finish_reason\":\"stop\",\"index\":0,\"delta\":{},\"content_filter_results\":{}}]}\n\n"
DATA:- : "data: [DONE]\n\n"
[error] Received invalid JSON: %Jason.DecodeError{position: 177, token: nil, data: "{\"id\":\"chatcmpl-8lZf0buihhFh5SCZrrKbnkiC7RFhu\",\"object\":\"chat.completion.chunk\",\"created\":1706349186,\"model\":\"gpt-3.5-turbo-1106\",\"system_fingerprint\":\"fp_b57c83dd65\",\"choices\":"}
[error] Received invalid JSON: %Jason.DecodeError{position: 74, token: nil, data: "[{\"index\":0,\"delta\":{\"content\":\".\"},\"logprobs\":null,\"finish_reason\":null}]}"}
[error] Received invalid JSON: %Jason.DecodeError{position: 68, token: nil, data: "{\"id\":\"chatcmpl-8lZf0buihhFh5SCZrrKbnkiC7RFhu\",\"object\":\"chat.comple"}
[error] Received invalid JSON: %Jason.DecodeError{position: 0, token: nil, data: "tion.chunk\",\"created\":1706349186,\"model\":\"gpt-3.5-turbo-1106\",\"system_fingerprint\":\"fp_b57c83dd65\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\" a\"},\"logprobs\":null,\"finish_reason\":null}]}"}
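
The usual fix is to buffer raw chunks and decode only complete SSE events (terminated by a blank line), carrying any trailing partial event into the next chunk. A self-contained sketch:

defmodule SSEBuffer do
  @moduledoc "Sketch: accumulate streamed chunks, decode only complete events."

  # Returns {decoded_events, leftover_buffer}
  def process(buffer, chunk) do
    parts = String.split(buffer <> chunk, "\n\n")
    {incomplete, complete} = List.pop_at(parts, -1)

    events =
      complete
      |> Enum.map(&String.replace_prefix(&1, "data: ", ""))
      |> Enum.reject(&(&1 in ["", "[DONE]"]))
      |> Enum.map(&Jason.decode!/1)

    {events, incomplete}
  end
end

Each call returns the fully decoded events plus the leftover to prepend to the next chunk.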

Make Elixir Function optional for LangChain.Function?

For workflows where we just need structured JSON outputs from the models (e.g. data extraction) through using tools, we may not need to execute any code in the client and send any messages back to the models. For such cases, does it make sense to make the function (Elixir Function) attribute optional for LangChain.Function?

(From Anthropic API Docs)

### How tool use works
Integrate external tools with Claude in these steps:

1. Provide Claude with tools and a user prompt
- Define tools with names, descriptions, and input schemas in your API request.
- Include a user prompt that might require these tools, e.g., “What’s the weather in San Francisco?”

2. Claude decides to use a tool
- Claude assesses if any tools can help with the user’s query.
- If yes, Claude constructs a properly formatted tool use request.
- The API response has a stop_reason of tool_use, signaling Claude’s intent.

3. Extract tool input, run code, and return results
- On your end, extract the tool name and input from Claude’s request.
- Execute the actual tool code client-side.
- Continue the conversation with a new user message containing a tool_result content block.

4. Claude uses tool result to formulate a response
- Claude analyzes the tool results to craft its final response to the original user prompt.

**Note: Steps 3 and 4 are optional. For some workflows, Claude’s tool use request (step 2) might be all you need, without sending results back to Claude.**

Multiple tool calls: Frequent timeouts

I'm experiencing this frequently:

** (exit) exited in: Task.await_many([%Task{mfa: {:erlang, :apply, 2}, owner: #PID<0.27977.0>, pid: #PID<0.29053.0>, ref: #Reference<0.0.3581059.2102371376.702611457.169984>}], 5000)
    ** (EXIT) time out
    (elixir 1.15.7) lib/task.ex:969: Task.await_many/5
    (elixir 1.15.7) lib/task.ex:953: Task.await_many/2
    (langchain 0.2.0) lib/chains/llm_chain.ex:456: LangChain.Chains.LLMChain.execute_tool_calls/2
    (langchain 0.2.0) lib/chains/llm_chain.ex:189: LangChain.Chains.LLMChain.run_while_needs_response/1

Can we make the Task.await_many timeout configurable (either via config or pass as an optional arg in the run call)? Glad to submit a PR.
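
The change itself looks small. A sketch of what execute_tool_calls/2 could do instead of the hardcoded 5_000, where opts and the :tool_timeout name are hypothetical:

timeout =
  Keyword.get(opts, :tool_timeout,
    Application.get_env(:langchain, :tool_timeout, 5_000))

Task.await_many(tasks, timeout)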

Is it possible to use this library to send an image for use with the "gpt-4-vision-preview" model?

I am experimenting with different libs to work with the OpenAI GPT APIs. I am trying to work out how to send an image with LangChain, but nothing I do seems to work. I had a similar issue with the ExOpenAI library (now solved), which you can see here: dvcrn/ex_openai#13

I am trying to work out how to do something equivalent to this (which uses a raw HTTPoison HTTP POST request):

  defp describe_image_using_httpoison(data, prompt) do
    payload = %{
      "model" => get_openai_description_model(),
      "messages" => [
        %{
          "role" => "user",
          "content" => [
            %{"type" => "text", "text" => prompt},
            %{
              "type" => "image_url",
              "image_url" => %{
                "url" => "data:image/jpeg;base64," <> data.image.data64
              }
            }
          ]
        }
      ],
      "max_tokens" => 1_000
    }

    case HTTPoison.post!(
           "https://api.openai.com/v1/chat/completions",
           Jason.encode!(payload),
           get_headers(),
           recv_timeout: 20000
         ) do
      %HTTPoison.Response{status_code: 200, body: body} ->
        case Jason.decode(body) do
          {:ok, content} ->
            [result | _] = content["choices"]
            description = result["message"]["content"]
            description

          error ->
            dbg(error)
        end

      error ->
        dbg(error)
    end
  end

That is just a snippet, but the "type" => "image_url"... bit is the bit I am trying to replicate with LangChain.

I have tried this:

  def describe(data, user_prompt \\ @desc_user_prompt) do
    {:ok, _updated_chain, response} =
      %{llm: ChatOpenAI.new!(%{model: @llm_model})}
      |> LLMChain.new!()
      |> LLMChain.add_messages([
        Message.new_system!(@desc_system_prompt),
        Message.new_user!(user_prompt),
        Message.new_user!(get_prompt_attrs_for_image_from_data(data.image))
      ])
      |> LLMChain.run()

    dbg(response)
    Map.put(data, :description, response)
    response.content
  end

  defp get_prompt_attrs_for_image_from_data(%STL.ML.ImageData{src: _, data64: imgdata64, error: _ }) do
    {:ok, content } = %{
      type: :image_url,
      image_url: %{url: "data:image/jpeg;base64," <> imgdata64}
    } |> Jason.encode()
    content
    # %{
    #   role: :user,
    #   content: %{
    #     type: :image_url,
    #     image_url: %{url: "data:image/jpeg;base64," <> imgdata64}
    #   }
    # }
  end

But no matter what I do with the function get_prompt_attrs_for_image_from_data, nothing seems to work. If I just encode the content as a string, the OpenAI API flips out with a "too many tokens" error because the image data is too big. But anything other than a string for content causes a validation error from LangChain.

Is there any way to send arbitrary post params in a LangChain call?

PS: For reference, this is how OpenAI describes the type: :image_url params: https://platform.openai.com/docs/guides/vision
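
If the installed version supports multi-part user messages via LangChain.Message.ContentPart (an assumption; it is not in every release), the HTTPoison payload above could translate to something like:

alias LangChain.Message
alias LangChain.Message.ContentPart

Message.new_user!([
  ContentPart.text!(prompt),
  # base64 image data; the media: option tells the model the format
  ContentPart.image!(imgdata64, media: :jpg)
])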

Unable to stream response from OpenAI after executing tool calls

Streaming works great normally; however, after executing a tool call requested by the LLM (in this case OpenAI), I'm unable to figure out how to stream the final response.

I haven't found a reason why in the code or docs, is this a limitation of OpenAI? If so feel free to close this issue 🙂

Potentially related: #10

Interrupting completion stream

Is there a way to interrupt the generation stream? It's technically possible, but I haven't found any mention in the docs.

It can be useful for user-facing frontends when a user can abort the answer of the assistant in the middle and rephrase the task.

OpenAI forum: https://community.openai.com/t/interrupting-completion-stream-in-python/30628

Ashton1998
Jun 2023
I make a simple test for @thehunmonkgroup 's solution.

I make a call to gpt-3.5-turbo model with input:

Please introduce GPT model structure as detail as possible
And let the api print all the token’s. The statistic result from OpenAI usage page is (I am a new user and is not allowed to post with >media, so I only copy the result):
17 prompt + 441 completion = 568 tokens

After that, I stop the generation when the number of token received is 9, the result is:
17 prompt + 27 completion = 44 tokens

It seems there are roughly extra 10 tokens generated after I stop the generation.

Then I stop the generation when the number is 100, the result is:
17 prompt + 111 completion = 128 tokens

So I think the solution work well but with extra 10~20 tokens every time.
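
One process-level workaround, sketched here: run the chain in a Task and kill it when the user aborts, which drops the HTTP connection and stops the stream (any tokens already generated server-side are still billed, per the quoted test).

task = Task.async(fn -> LLMChain.run(chain) end)

# ...later, when the user hits "stop":
Task.shutdown(task, :brutal_kill)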

`LLMChain.run()` for ollama doesn't work

alias LangChain.{Chains.LLMChain, Message, ChatModels.ChatOllamaAI}

LLMChain.new!(%{
  llm: ChatOllamaAI.new!(%{
    model: "mixtral:8x7b-instruct-v0.1-q4_K_M"
  }),
  verbose: true
})
|> LLMChain.add_message(Message.new_user!("hello world!"))
|> LLMChain.run()
** (CaseClauseError) no case clause matching: {:ok, %LangChain.Message{content: " Hello! It's nice to see you. Is there something specific you would like to talk about or ask me a question? I'm here to help with any programming-related questions you have.\n\nIf you don't have anything specific in mind, that's okay too! We can just chat about whatever interests you. Do you enjoy coding? What are some of your favorite programming languages and why do you like them?\n\nI'm particularly fond of Python, because it's a great language for beginners to learn, with a clean and easy-to-understand syntax. It's also very versatile and can be used for a wide variety of applications, from web development to data analysis to machine learning.\n\nBut enough about me! I want to hear about you. What brings you to coding? Is there something specific you're hoping to learn or accomplish with your programming skills? Let me know and I'll do my best to help you out.", index: nil, status: :complete, role: :assistant, function_name: nil, arguments: nil}}
    (langchain 0.1.7) lib/chains/llm_chain.ex:209: LangChain.Chains.LLMChain.do_run/1
    (langchain 0.1.7) lib/chains/llm_chain.ex:170: LangChain.Chains.LLMChain.run/2
    iex:4: (file)

Upgrade abacus requirements to support elixir 1.17 and OTP 27

With elixir 1.17 and OTP 27, mix test will fail with the following error:

==> abacus
Compiling 3 files (.erl)
src/new_parser.yrl:54:7: syntax error before: 'else'
%   54|       {'else', '$5'}
%     |       ^

src/new_parser.erl:819:13: function yeccpars2_46_/1 undefined
%  819|  NewStack = yeccpars2_46_(Stack),
%     |             ^

src/new_parser.yrl:71:2: inlined function yeccpars2_46_/1 undefined
%   71| expr -> expr '/' expr : {'/', [], ['$1', '$3']}.
%     |  ^

could not compile dependency :abacus, "mix compile" failed. Errors may have been logged above. You can recompile this dependency with "mix deps.compile abacus --force", update it with "mix deps.update abacus" or clean it with "mix deps.clean abacus"

I've submitted a PR to abacus (narrowtux/abacus#27) to fix it. The abacus requirement will need to be upgraded once it's released.

In the meantime, you can try the fix by using this in mix.exs:

{:abacus, github: "MrYawe/abacus"},
