
ollama-ai-provider's Introduction

Hi! I'm Sergio

ollama-ai-provider's People

Contributors

matrushka, sgomez


ollama-ai-provider's Issues

Add llama3.1

Hi @sgomez

Thank you for creating the ollama provider.

I would like to add support for the new llama3.1 models.
Is changing ollama-chat-settings.ts enough to add support for them?
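
For reference, here is a hedged sketch of the kind of change involved, assuming ollama-chat-settings.ts exports a string-literal union of known model ids (the exact entries and catch-all in the repository may differ):

// ollama-chat-settings.ts (sketch only; the real union in the repo may look different)
export type OllamaChatModelId =
  | 'llama3'
  | 'llama3:70b'
  // Adding the new ids gives editor autocompletion for them:
  | 'llama3.1'
  | 'llama3.1:70b'
  | 'llama3.1:405b'
  // If the union already ends with a catch-all like this, any model name
  // (including 'llama3.1') is accepted even without the explicit entries:
  | (string & NonNullable<unknown>);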

Error: Unsupported role: tool

Describe the bug
When attempting to use the maxToolRoundtrips parameter in the generateText method from the ai package, I get the following error: Unsupported role: tool. I tracked the error down to this package, as it doesn't occur with the openai package from @ai-sdk/openai.

To Reproduce
Steps to reproduce the behavior:

  1. Clone my example repo
  2. Install the dependencies with pnpm install
  3. Run the sample code with pnpm tsx index.ts
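
For context, a minimal inline sketch of the failing pattern, assuming the ai 3.x API where maxToolRoundtrips feeds tool results back to the model as role "tool" messages (the weather tool and model name below are hypothetical):

import { generateText, tool } from 'ai';
import { ollama } from 'ollama-ai-provider';
import { z } from 'zod';

const { text } = await generateText({
  model: ollama('llama3.1'), // hypothetical model choice
  // After execute() runs, the SDK sends the tool result back as a role: 'tool'
  // message for the next roundtrip; this provider then throws "Unsupported role: tool".
  maxToolRoundtrips: 2,
  prompt: 'What is the weather in Berlin?',
  tools: {
    getWeather: tool({
      description: 'Get the weather for a city',
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, temperature: 21 }),
    }),
  },
});

console.log(text);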

Inconsistent responses in generateText with tools compared to openai

Describe the bug

generateText({
  messages: [...],
  tools: {
    query_screenpipe: {
      description:
        "Query the local screenpipe instance for relevant information. You will return multiple queries under the key 'queries'.",
      // some zod schema
      parameters: screenpipeMultiQuery,
      // some function calling my api
      execute: queryScreenpipeNtimes,
    },
  },
  toolChoice: "required",
});

To Reproduce
Steps to reproduce the behavior:

  1. use generateText with a tool
  2. console log the .toolCalls or .toolResults props (empty)

Expected behavior

The toolCalls and toolResults props should be properly filled.

At the moment I have to JSON.parse the text instead.

With openai these props are properly filled.

Also, execute is not called at all.
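
For comparison, a small sketch of the expected usage where the SDK fills these props; the model name, prompt, schema, and execute body below are placeholders standing in for the screenpipe ones above:

import { generateText } from 'ai';
import { ollama } from 'ollama-ai-provider';
import { z } from 'zod';

const result = await generateText({
  model: ollama('mistral'), // placeholder model
  prompt: 'Search my screen history for standup notes.',
  toolChoice: 'required',
  tools: {
    query_screenpipe: {
      description: 'Query the local screenpipe instance for relevant information.',
      // stand-ins for the real screenpipeMultiQuery schema and queryScreenpipeNtimes function
      parameters: z.object({ queries: z.array(z.string()) }),
      execute: async ({ queries }: { queries: string[] }) =>
        queries.map((q) => `result for ${q}`),
    },
  },
});

// Expected: both arrays populated and execute() invoked.
// Observed with this provider: both empty, so the text has to be JSON.parse'd manually.
console.log(result.toolCalls);
console.log(result.toolResults);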

Accumulation and logging of previous AI responses using the Vercel AI SDK

Describe the bug
When using streamText with ollama and the Vercel AI SDK, there seems to be an issue where previous messages are logged instead of only the generated response.

I've tested this with mistral and gemma2, with the same result. The issue seems to go away when using openai, hence reporting it here instead of against Vercel's SDK.

To Reproduce

npm install ai ollama-ai-provider
import { ollama } from 'ollama-ai-provider';
import { generateText } from 'ai';

(async () => {
  const { text } = await generateText({
    model: ollama('mistral'),
    system: 'You are a helpful assistant.',
    messages: [
      { role: 'user', content: 'What is 1+1?' },
      { role: 'assistant', content: '2' },
      { role: 'user', content: 'What is 2+2?' },
      { role: 'assistant', content: '4' },
      { role: 'user', content: 'What is 3+3?' },
    ],
  });

  console.log(text);
})();

Result

EXPECTED:

6

ACTUAL:

2
4
 6

Desktop (please complete the following information):

  • OS: MacOS 14.5 (23F79)
  • Browser: N/A
  • Node: v20.11.0
  • ollama-ai-provider: 0.9.0
  • vercel ai SDK: 3.2.15

This issue makes it difficult to build terminal-based chat interfaces, as previous messages are repeatedly shown in the output.
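
The streaming variant referred to in the title shows the same accumulation in a terminal loop; a minimal sketch (the model choice mirrors the repro above):

import { streamText } from 'ai';
import { ollama } from 'ollama-ai-provider';

const result = await streamText({
  model: ollama('mistral'),
  system: 'You are a helpful assistant.',
  messages: [
    { role: 'user', content: 'What is 1+1?' },
    { role: 'assistant', content: '2' },
    { role: 'user', content: 'What is 2+2?' },
  ],
});

// Expected: only the new answer is streamed.
// Observed: earlier assistant turns ("2") are emitted again before the new answer.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}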


Parsing error with `streamText`: `AI_JSONParseError: JSON parsing failed`

Using streamText with the Ollama provider yields AI_JSONParseError.

It seems like everything is working, except that it tries to JSON.parse() JSON snippets before they are fully read into the buffer.

Any guidance or help would be appreciated. Thanks for creating this package.

Code to reproduce (taken from your examples)

import { streamText } from 'ai'
import { createOllama } from 'ollama-ai-provider'

// Model construction added for completeness; the error below shows llama3:8b in use.
const ollama = createOllama()
const model = ollama('llama3:8b')

const result = await streamText({
  maxTokens: 1024,
  messages: [
    {
      content: 'Hello!',
      role: 'user',
    },
  ],
  model: model,
  system: 'You are a helpful chatbot.',
})

console.log('after starting streamText. Result:', result)

for await (const textPart of result.textStream) {
  console.log('OLLAMA TEXT PART:', textPart)
}

Full error:

 [AI_JSONParseError: JSON parsing failed: Text: {"model":"llama3:8b","created_at":"2024-06-24T23:49:13.635502488Z","message":{"role":"assistant","content":"Don"},"done":false}
{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.644880728Z","message":{"role":"assistant","content":"'t"},"done":false}
{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.653954767Z","message":{"role":"assistant","content":" worry"},"done":false}
{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.663009567Z","message":{"role":"assistant","content":","},"done":false}
{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.672704366Z","message":{"role":"assistant","content":" I"},"done":false}
{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.681958526Z","message":{"role":"assistant","content":"'m"},"done":false}
{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.691041205Z","message":{"role":"assistant","content":" here"},"done":false}
{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.700136604Z","message":{"role":"assistant","content":" to"},"done":false}
{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.709239924Z","message":{"role":"assistant","content":" assist"},"done":false}
{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.723447603Z","message":{"role":"assistant","content":" you"},"done":false}
{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.750979481Z","message":{.
Error message: Unexpected token { in JSON at position 128] {
  name: 'AI_JSONParseError',
  cause: [SyntaxError: Unexpected token { in JSON at position 128],
  text: '{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.635502488Z","message":{"role":"assistant","content":"Don"},"done":false}\n{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.644880728Z","message":{"role":"assistant","content":"\'t"},"done":false}\n{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.653954767Z","message":{"role":"assistant","content":" worry"},"done":false}\n{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.663009567Z","message":{"role":"assistant","content":","},"done":false}\n{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.672704366Z","message":{"role":"assistant","content":" I"},"done":false}\n{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.681958526Z","message":{"role":"assistant","content":"\'m"},"done":false}\n{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.691041205Z","message":{"role":"assistant","content":" here"},"done":false}\n{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.700136604Z","message":{"role":"assistant","content":" to"},"done":false}\n{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.709239924Z","message":{"role":"assistant","content":" assist"},"done":false}\n{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.723447603Z","message":{"role":"assistant","content":" you"},"done":false}\n{"model":"llama3:8b","created_at":"2024-06-24T23:49:13.750979481Z","message":{'
}
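
The text in the error above is several newline-delimited JSON objects parsed as one document, which matches the application/x-ndjson content type in the response headers further down. As an illustration of that failure mode (not the provider's actual code), a sketch that accumulates chunks, parses only complete lines, and keeps the trailing partial line for the next chunk:

// Buffer NDJSON chunks and parse only complete lines.
let pending = '';

function parseNdjsonChunk(chunk: string): unknown[] {
  pending += chunk;
  const lines = pending.split('\n');
  pending = lines.pop() ?? ''; // last element may be an incomplete JSON object
  return lines
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}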

Here's the StreamTextResult. Is it a problem that the reader is undefined (reader: undefined)?

StreamTextResult {
  originalStream: ReadableStream {
  [Symbol(kType)]: 'ReadableStream',
  [Symbol(kState)]: <ref *1> {
  disturbed: false,
  reader: undefined,
  state: 'readable',
  storedError: undefined,
  stream: undefined,
  transfer: {
  writable: undefined,
  port1: undefined,
  port2: undefined,
  promise: undefined
},
  controller: ReadableStreamDefaultController {
  [Symbol(kType)]: 'ReadableStreamDefaultController',
  [Symbol(kState)]: {
  cancelAlgorithm: [Function: nonOpCancel],
  closeRequested: false,
  highWaterMark: 1,
  pullAgain: false,
  pullAlgorithm: [Function: nonOpPull],
  pulling: false,
  queue: [],
  queueTotalSize: 0,
  started: false,
  sizeAlgorithm: [Function],
  stream: ReadableStream {
  [Symbol(kType)]: 'ReadableStream',
  [Symbol(kState)]: [Circular *1],
  [Symbol(nodejs.webstream.isClosedPromise)]: {
  promise: Promise {
  [Symbol(async_id_symbol)]: 819253,
  [Symbol(trigger_async_id_symbol)]: 813769,
  [Symbol(kResourceStore)]: undefined,
  [Symbol(kResourceStore)]: undefined,
  [Symbol(kResourceStore)]: undefined
},
  resolve: [Function],
  reject: [Function]
},
  [Symbol(nodejs.webstream.controllerErrorFunction)]: [Function: bound error]
}
}
}
},
  [Symbol(nodejs.webstream.isClosedPromise)]: {
  promise: Promise {
  [Symbol(async_id_symbol)]: 819253,
  [Symbol(trigger_async_id_symbol)]: 813769,
  [Symbol(kResourceStore)]: undefined,
  [Symbol(kResourceStore)]: undefined,
  [Symbol(kResourceStore)]: undefined
},
  resolve: [Function],
  reject: [Function]
},
  [Symbol(nodejs.webstream.controllerErrorFunction)]: [Function: bound error]
},
  warnings: [],
  rawResponse: {
  headers: {
  alt-svc: 'h3=":443"; ma=86400',
  cf-cache-status: 'DYNAMIC',
  cf-ray: '8990cf0e9dd2c551-SEA',
  connection: 'keep-alive',
  content-type: 'application/x-ndjson',
  date: 'Tue, 25 Jun 2024 00:23:30 GMT',
  nel: '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}',
  report-to: '{"endpoints":[{"url":"https:\\/\\/a.nel.cloudflare.com\\/report\\/v4?s=Bl6yODzpXFHa%2BBQ5SOg8MhYQnRBeHX39%2F%2Bt9L%2B1pQP3s8fvJoOxP%2BR2%2BzpTY2xJiQpefPoo%2BVq4oqyZXa7HBqc7FwPFc0HlJzPWbwmudhT2sxK0CE2QGCzPZgjFeydCaaQ%3D%3D"}],"group":"cf-nel","max_age":604800}',
  server: 'cloudflare',
  transfer-encoding: 'chunked'
}
}

Environment

  • Next.js Pages router
  • npm view ollama-ai-provider version --> 0.7.0
  • npm view ai version --> 3.2.8

ollama tool calling with Vercel's AI SDK

The AI SDK says that, in order to call tools using the LLM, the model needs to support tool calling.

Using this Ollama provider, which Ollama models are capable of tool calling?

Issue when using the streamText

While using the streamText function from the Vercel AI SDK, I noticed an issue which makes streaming text not work.

The provider sends a request to http://127.0.0.1:11434/api/chat, which results in:

AI_APICallError: Failed to process successful response
url: "http://127.0.0.1:11434/api/chat"

I don't know why that is. Is it because it may have to hit the /generate endpoint for streaming to work? Anyway, help would be very much appreciated.
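
For reference, a minimal sketch of the call pattern that produces this request (the model name is a placeholder; the provider's default base URL is assumed to resolve to the /api/chat endpoint shown above):

import { streamText } from 'ai';
import { ollama } from 'ollama-ai-provider';

const result = await streamText({
  model: ollama('llama3'), // placeholder model name
  prompt: 'Hello!',
});

// Iterate the streamed text; the reported AI_APICallError occurs with calls like this.
for await (const part of result.textStream) {
  process.stdout.write(part);
}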
