orhanerday / open-ai

OpenAI PHP SDK: the most downloaded, forked, and contributed-to community PHP SDK for the OpenAI GPT-3 and DALL·E APIs, usable with Laravel, Symfony, Yii, CakePHP, or any PHP framework. It also supports ChatGPT-like streaming. (The ChatGPT API is supported.)

Home Page: https://orhanerday.gitbook.io/openai-php-api-1/

License: MIT License



OpenAI API Client in PHP



The ChatGPT API is supported; click here for implementation instructions.


A message from the creator:

Thank you for visiting the @orhanerday/open-ai repository! If you find it helpful or useful, please consider starring it on GitHub. Starring shows your support, increases the project's visibility, and lets the community know it is valuable. Thanks again, and we hope you find the repository useful!

Orhan









Featured in

Jetbrains Blog

Laravel News

日思录


Comparison With Other Packages

| Project Name | Required PHP Version (lower is better) | Description | Type (Official / Community) | Support |
| --- | --- | --- | --- | --- |
| orhanerday/open-ai | PHP 7.4+ | The most downloaded, forked, and contributed-to PHP SDK for OpenAI GPT-3 and DALL·E, with ChatGPT-like streaming support. | Community | Available (community-driven Discord server or personal mail [email protected]) |
| openai-** /c***t | PHP 8.1+ | OpenAI PHP API client. | Community | - |

About this package

Fully open-source, secure, community-maintained PHP SDK for accessing the OpenAI GPT-3 API.

For more information, you can read the Laravel News blog post.

Free support is available; join our Discord server.

To get started with this package, first familiarize yourself with the OpenAI API documentation and examples. You can also get help in our Discord channel, #api-support.

News

  • orhanerday/open-ai was added to the community libraries PHP section.
  • orhanerday/open-ai was featured on the PhpStorm blog. Thanks, JetBrains!

Requires PHP 7.4+

Join our discord server


Click here to join the Discord server

Support this project

As you may know, OpenAI PHP is an open-source wrapper for the OpenAI API. We rely on community support to continue developing and maintaining the project, and one way you can help is by making a donation.

Donations allow us to cover expenses such as hosting costs (for testing), development tools, and other resources necessary to keep the project running smoothly. Every contribution, no matter how small, helps us continue improving OpenAI PHP for everyone.

If you have benefited from using OpenAI PHP and would like to support its continued development, we would greatly appreciate a donation of any amount. You can make a donation through the links in the Donation section below.

Thank you for considering a donation to Orhanerday/OpenAI PHP SDK. Your support is greatly appreciated and helps to ensure that the project can continue to grow and improve.

Sincerely,

Orhan Erday / Creator.

Documentation

Please visit https://orhanerday.gitbook.io/openai-php-api-1/

Endpoint Support

Installation

You can install the package via composer:

composer require orhanerday/open-ai

Quick Start ⚡

Before you get started, set OPENAI_API_KEY as an environment variable containing your OpenAI key, using one of the following commands:

PowerShell

$Env:OPENAI_API_KEY = "sk-gjtv....."

Cmd

set OPENAI_API_KEY=sk-gjtv.....

Linux or macOS

export OPENAI_API_KEY=sk-gjtv.....

Having issues setting up the environment variable? Read the article, or see my StackOverflow answer for the Windows® ENV setup.

Create an index.php file and paste the following code into it.

<?php

require __DIR__ . '/vendor/autoload.php'; // remove this line if you use a PHP Framework.

use Orhanerday\OpenAi\OpenAi;

$open_ai_key = getenv('OPENAI_API_KEY');
$open_ai = new OpenAi($open_ai_key);

$chat = $open_ai->chat([
   'model' => 'gpt-3.5-turbo',
   'messages' => [
       [
           "role" => "system",
           "content" => "You are a helpful assistant."
       ],
       [
           "role" => "user",
           "content" => "Who won the world series in 2020?"
       ],
       [
           "role" => "assistant",
           "content" => "The Los Angeles Dodgers won the World Series in 2020."
       ],
       [
           "role" => "user",
           "content" => "Where was it played?"
       ],
   ],
   'temperature' => 1.0,
   'max_tokens' => 4000,
   'frequency_penalty' => 0,
   'presence_penalty' => 0,
]);


var_dump($chat);
echo "<br>";
echo "<br>";
echo "<br>";
// decode response
$d = json_decode($chat);
// Get Content
echo($d->choices[0]->message->content);

Run the server with the following command

php -S localhost:8000 -t .

NVIDIA NIM Integration

orhanerday/open-ai supports NVIDIA NIM. The example below uses Mixtral; check https://build.nvidia.com/explore/discover for more examples.

<?php

require __DIR__ . '/vendor/autoload.php'; // remove this line if you use a PHP Framework.

use Orhanerday\OpenAi\OpenAi;

$nvidia_ai_key = getenv('NVIDIA_AI_API_KEY');
$open_ai = new OpenAi($nvidia_ai_key);
$open_ai->setBaseURL("https://integrate.api.nvidia.com");
$chat = $open_ai->chat([
    'model' => 'mistralai/mixtral-8x7b-instruct-v0.1',
    'messages' => [["role" => "user", "content" => "Write a limerick about the wonders of GPU computing."]],
    'temperature' => 0.5,
    'max_tokens' => 1024,
    'top_p' => 1,
]);

var_dump($chat);
echo "<br>";
echo "<br>";
echo "<br>";
// decode response
$d = json_decode($chat);
// Get Content
echo ($d->choices[0]->message->content);

Usage

Load your key from an environment variable.

In the following code, $open_ai is the base object for all OpenAI operations.

use Orhanerday\OpenAi\OpenAi;

$open_ai = new OpenAi(env('OPEN_AI_API_KEY'));

Requesting organization

For users who belong to multiple organizations, you can pass a header to specify which organization is used for an API request. Usage from these API requests will count against the specified organization's subscription quota.

$open_ai_key = getenv('OPENAI_API_KEY');
$open_ai = new OpenAi($open_ai_key);
$open_ai->setORG("org-IKN2E1nI3kFYU8ywaqgFRKqi");

Base URL

You can specify a different origin URL with the setBaseURL() method:

$open_ai_key = getenv('OPENAI_API_KEY');
$open_ai = new OpenAi($open_ai_key);
$open_ai->setBaseURL("https://ai.example.com/");

Use Proxy

You can route your API requests through a proxy server:

$open_ai->setProxy("http://127.0.0.1:1086");

Set header

$open_ai->setHeader(["Connection"=>"keep-alive"]);

Get cURL request info

!!! WARNING: Your API key will be exposed if you leave this method in your code, so remove it before deployment. Be careful!

You can get cURL info after the request.

$open_ai = new OpenAi($open_ai_key);
echo $open_ai->listModels(); // you should execute a request FIRST!
var_dump($open_ai->getCURLInfo()); // then you can read the cURL info of that request

Chat (also known as the ChatGPT API)

Given a chat conversation, the model will return a chat completion response.

$complete = $open_ai->chat([
   'model' => 'gpt-3.5-turbo',
   'messages' => [
       [
           "role" => "system",
           "content" => "You are a helpful assistant."
       ],
       [
           "role" => "user",
           "content" => "Who won the world series in 2020?"
       ],
       [
           "role" => "assistant",
           "content" => "The Los Angeles Dodgers won the World Series in 2020."
       ],
       [
           "role" => "user",
           "content" => "Where was it played?"
       ],
   ],
   'temperature' => 1.0,
   'max_tokens' => 4000,
   'frequency_penalty' => 0,
   'presence_penalty' => 0,
]);

Accessing the Element

<?php
// Dummy Response For Chat API
$j = '
{
   "id":"chatcmpl-*****",
   "object":"chat.completion",
   "created":1679748856,
   "model":"gpt-3.5-turbo-0301",
   "usage":{
      "prompt_tokens":9,
      "completion_tokens":10,
      "total_tokens":19
   },
   "choices":[
      {
         "message":{
            "role":"assistant",
            "content":"This is a test of the AI language model."
         },
         "finish_reason":"length",
         "index":0
      }
   ]
}
';

// decode response
$d = json_decode($j);

// Get Content
echo($d->choices[0]->message->content);

Completions

Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.

$complete = $open_ai->completion([
   'model' => 'gpt-3.5-turbo-instruct',
   'prompt' => 'Hello',
   'temperature' => 0.9,
   'max_tokens' => 150,
   'frequency_penalty' => 0,
   'presence_penalty' => 0.6,
]);
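The completion response is a raw JSON string. Here is a minimal sketch of pulling out the generated text, using a dummy response in place of a live API call (the ID and text are placeholders), following the same pattern as the "Accessing the Element" section:

```php
<?php
// Dummy response shaped like a completions API result (ID and text are placeholders).
$j = '{
   "id": "cmpl-xxxx",
   "object": "text_completion",
   "model": "gpt-3.5-turbo-instruct",
   "choices": [
      {"text": "Hello! How can I help you today?", "index": 0, "finish_reason": "stop"}
   ]
}';

// decode response
$d = json_decode($j);
// For the completions endpoint the generated text is in choices[0]->text
// (unlike chat, which uses choices[0]->message->content).
echo $d->choices[0]->text;
```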

Stream Example

This feature might sound familiar from ChatGPT.


ChatGPT Clone Project

Video of demo:

Isimsiz.video.Clipchamp.ile.yapildi.mp4

ChatGPT clone is a simple web application powered by the OpenAI library and built with PHP. It allows users to chat with an AI language model that responds in real-time. Chat history is saved using cookies, and the project requires the use of an API key and enabled SQLite3.

URL of the ChatGPT-Clone repo: https://github.com/orhanerday/ChatGPT


Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.

$open_ai = new OpenAi(env('OPEN_AI_API_KEY'));

$opts = [
   'prompt' => "Hello",
   'temperature' => 0.9,
   "max_tokens" => 150,
   "frequency_penalty" => 0,
   "presence_penalty" => 0.6,
   "stream" => true,
];

header('Content-type: text/event-stream');
header('Cache-Control: no-cache');

$open_ai->completion($opts, function ($curl_info, $data) {
   echo $data . "<br><br>";
   echo PHP_EOL;
   ob_flush();
   flush();
   return strlen($data);
});

Add this part inside <body> of the HTML

<div id="divID">Hello</div>
<script>
   var eventSource = new EventSource("/");
   var div = document.getElementById('divID');


   eventSource.onmessage = function (e) {
      if (e.data == "[DONE]") {
          div.innerHTML += "<br><br>Hello";
          return; // the [DONE] sentinel is not JSON, so stop before parsing
      }
      div.innerHTML += JSON.parse(e.data).choices[0].text;
   };
   eventSource.onerror = function (e) {
       console.log(e);
   };
</script>

You should see a response like the one in the video:

stream-event.mp4

Edits

Creates a new edit for the provided input, instruction, and parameters

   $result = $open_ai->createEdit([
       "model" => "text-davinci-edit-001",
       "input" => "What day of the wek is it?",
       "instruction" => "Fix the spelling mistakes",
   ]);

Images (DALL·E)

All DALL·E examples are available in this repo.

Given a prompt, the model will return one or more generated images as urls or base64 encoded.

Create image

Creates an image given a prompt.

$complete = $open_ai->image([
   "prompt" => "A cat drinking milk",
   "n" => 1,
   "size" => "256x256",
   "response_format" => "url",
]);
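The image endpoint also returns a JSON string; with "response_format" => "url", each generated image appears under data[]. A sketch against a dummy response (the URL is a placeholder):

```php
<?php
// Dummy response shaped like an image-generation result (URL is a placeholder).
$j = '{
   "created": 1679748856,
   "data": [
      {"url": "https://example.com/generated/cat.png"}
   ]
}';

$d = json_decode($j);
// With "response_format" => "url" each item in data carries a url field;
// with "b64_json" it would carry base64-encoded image data instead.
echo $d->data[0]->url;
```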

Create image edit

Creates an edited or extended image given an original image and a prompt.

Need HTML upload for image edit or variation? Check the DALL·E Examples.

$otter = curl_file_create(__DIR__ . '/files/otter.png');
$mask = curl_file_create(__DIR__ . '/files/mask.jpg');

$result = $open_ai->imageEdit([
    "image" => $otter,
    "mask" => $mask,
    "prompt" => "A cute baby sea otter wearing a beret",
    "n" => 2,
    "size" => "1024x1024",
]);

Create image variation

Creates a variation of a given image.

$otter = curl_file_create(__DIR__ . '/files/otter.png');

$result = $open_ai->createImageVariation([
    "image" => $otter,
    "n" => 2,
    "size" => "256x256",
]);

Searches

(Deprecated)

This endpoint is deprecated and was removed on December 3rd, 2022. OpenAI developed new methods with better performance. Learn more.

Given a query and a set of documents or labels, the model ranks each document based on its semantic similarity to the provided query.

$search = $open_ai->search([
    'engine' => 'ada',
    'documents' => ['White House', 'hospital', 'school'],
    'query' => 'the president',
]);

Embeddings

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

Related guide: Embeddings

Create embeddings

$result = $open_ai->embeddings([
    "model" => "text-similarity-babbage-001",
    "input" => "The food was delicious and the waiter..."
]);
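The embeddings response nests the vector under data[]. A sketch using a dummy, truncated response (real vectors have hundreds of dimensions):

```php
<?php
// Dummy response shaped like an embeddings result (vector truncated to 3 values).
$j = '{
   "object": "list",
   "data": [
      {"object": "embedding", "index": 0, "embedding": [0.0023, -0.0091, 0.0154]}
   ]
}';

$d = json_decode($j);
$vector = $d->data[0]->embedding; // plain PHP array of floats
echo count($vector); // dimensionality of the embedding
```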

Answers

(Deprecated)

This endpoint is deprecated and was removed on December 3rd, 2022. We’ve developed new methods with better performance. Learn more.

Given a question, a set of documents, and some examples, the API generates an answer to the question based on the information in the set of documents. This is useful for question-answering applications on sources of truth, like company documentation or a knowledge base.

$answer = $open_ai->answer([
  'documents' => ['Puppy A is happy.', 'Puppy B is sad.'],
  'question' => 'which puppy is happy?',
  'search_model' => 'ada',
  'model' => 'curie',
  'examples_context' => 'In 2017, U.S. life expectancy was 78.6 years.',
  'examples' => [['What is human life expectancy in the United States?', '78 years.']],
  'max_tokens' => 5,
  'stop' => ["\n", '<|endoftext|>'],
]);

Classifications

(Deprecated)

This endpoint is deprecated and was removed on December 3rd, 2022. OpenAI developed new methods with better performance. Learn more.

Given a query and a set of labeled examples, the model will predict the most likely label for the query. Useful as a drop-in replacement for any ML classification or text-to-label task.

$classification = $open_ai->classification([
   'examples' => [
       ['A happy moment', 'Positive'],
       ['I am sad.', 'Negative'],
       ['I am feeling awesome', 'Positive'],
   ],
   'labels' => ['Positive', 'Negative', 'Neutral'],
   'query' => 'It is a raining day =>(',
   'search_model' => 'ada',
   'model' => 'curie',
]);

Content Moderations

Given an input text, returns whether the model classifies it as violating OpenAI's content policy.

$flags = $open_ai->moderation([
    'input' => 'I want to kill them.'
]);

Learn more about content moderation here: OpenAI Moderations
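The moderation response carries a per-category breakdown under results[]. A sketch of checking the flags, using a dummy response (the category list is abbreviated; the real API returns many more categories):

```php
<?php
// Dummy response shaped like a moderation result (category list abbreviated).
$j = '{
   "id": "modr-xxxx",
   "model": "text-moderation-latest",
   "results": [
      {
         "flagged": true,
         "categories": {"violence": true, "hate": false},
         "category_scores": {"violence": 0.97, "hate": 0.01}
      }
   ]
}';

$d = json_decode($j);
$result = $d->results[0];
if ($result->flagged) {
    // Report every category that triggered the flag.
    foreach ($result->categories as $category => $violated) {
        if ($violated) {
            echo "Flagged for: $category\n";
        }
    }
}
```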

List engines

(Deprecated)

The Engines endpoints are deprecated. Please use their replacement, Models, instead. Learn more.

Lists the currently available engines, and provides basic information about each one such as the owner and availability.

$engines = $open_ai->engines();

Audio

Text To Speech (TTS)

$result = $open_ai->tts([
    "model" => "tts-1", // tts-1-hd
    "input" => "I'm going to use the stones again. Hey, we'd be going in short-handed, you know",
    "voice" => "alloy", // echo, fable, onyx, nova, and shimmer
]);

// Save audio file
file_put_contents('tts-result.mp3', $result);

Create Transcription

Transcribes audio into the input language.

$c_file = curl_file_create(__DIR__ . '/files/en-marvel-endgame.m4a');

$result = $open_ai->transcribe([
    "model" => "whisper-1",
    "file" => $c_file,
]);

Response

{
  "text": "I'm going to use the stones again. Hey, we'd be going in short-handed, you know. Look, he's still got the stones, so... So let's get them. Use them to bring everyone back. Just like that? Yeah, just like that. Even if there's a small chance that we can undo this, I mean, we owe it to everyone who's not in this room to try. If we do this, how do we know it's going to end any differently than it did before? Because before you didn't have me. Hey, little girl, everybody in this room is about that superhero life. And if you don't mind my asking, where the hell have you been all this time? There are a lot of other planets in the universe. But unfortunately, they didn't have you guys. I like this one. Let's go get this son of a bitch."
}

Create Translation

Translates audio into English.

A Turkish recording is used for the translation, thanks to the famous science YouTuber Barış Özcan.

$c_file = curl_file_create(__DIR__ . '/files/tr-baris-ozcan-youtuber.m4a');

$result = $open_ai->translate([
    "model" => "whisper-1",
    "file" => $c_file,
]);

Response

{
  "text": "GPT-3. Last month, the biggest leap in the world of artificial intelligence in recent years happened silently. Maybe the biggest leap of all time. GPT-3's beta version was released by OpenAI. When you hear such a sentence, you may think, what kind of leap is this? But be sure, this is the most advanced language model with the most advanced language model with the most advanced language ability. It can answer these artificial intelligence questions, it can translate and even write poetry. Those who have gained access to the API or API of GPT-3 have already started to make very interesting experiments. Let's look at a few examples together. Let's start with an example of aphorism. This site produces beautiful words that you can tweet. Start to actually do things with your words instead of just thinking about them."
}

Need HTML upload for audio? Check this section and change the API references accordingly. Example:

...
    echo $open_ai->translate(
        [
            "model" => "whisper-1",
            "file" => $c_file,
        ]
    );
...
// OR
...
    echo $open_ai->transcribe(
        [
            "model" => "whisper-1",
            "file" => $c_file,
        ]
    );
...

Files

Files are used to upload documents that can be used across features like Answers, Search, and Classifications.

List files

Returns a list of files that belong to the user's organization.

$files = $open_ai->listFiles();

Upload file

Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact OpenAI if you need to increase the storage limit.

$c_file = curl_file_create(__DIR__ . '/files/sample_file_1.jsonl');
$result = $open_ai->uploadFile([
    "purpose" => "answers",
    "file" => $c_file,
]);
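The upload response is a file object whose id is what the other file endpoints expect. A sketch of grabbing it from a dummy response (the id and byte count are placeholders):

```php
<?php
// Dummy response shaped like the file object returned after an upload.
$j = '{
   "id": "file-xxxxxxxx",
   "object": "file",
   "purpose": "answers",
   "filename": "sample_file_1.jsonl",
   "bytes": 140
}';

$d = json_decode($j);
// Keep the id; retrieveFile(), deleteFile(), and fine-tune calls take it.
echo $d->id;
```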

Upload file with HTML Form

<form action="index.php" method="post" enctype="multipart/form-data">
    Select file to upload:
    <input type="file" name="fileToUpload" id="fileToUpload">
    <input type="submit" value="Upload File" name="submit">
</form>
<?php
require __DIR__ . '/vendor/autoload.php';

use Orhanerday\OpenAi\OpenAi;

if ($_SERVER['REQUEST_METHOD'] == 'POST') {
    ob_clean();
    $open_ai = new OpenAi(env('OPEN_AI_API_KEY'));
    $tmp_file = $_FILES['fileToUpload']['tmp_name'];
    $file_name = basename($_FILES['fileToUpload']['name']);
    $c_file = curl_file_create($tmp_file, $_FILES['fileToUpload']['type'], $file_name);

    echo "[";
    echo $open_ai->uploadFile(
        [
            "purpose" => "answers",
            "file" => $c_file,
        ]
    );
    echo ",";
    echo $open_ai->listFiles();
    echo "]";

}

Delete file

$result = $open_ai->deleteFile('file-xxxxxxxx');

Retrieve file

$file = $open_ai->retrieveFile('file-xxxxxxxx');

Retrieve file content

$file = $open_ai->retrieveFileContent('file-xxxxxxxx');

Fine-tunes

Manage fine-tuning jobs to tailor a model to your specific training data.

Create fine-tune

$result = $open_ai->createFineTune([
       "model" => "gpt-3.5-turbo-1106",
       "training_file" => "file-U3KoAAtGsjUKSPXwEUDdtw86",
]);

List fine-tune

$fine_tunes = $open_ai->listFineTunes();

Retrieve fine-tune

$fine_tune = $open_ai->retrieveFineTune('ft-AF1WoRqd3aJAHsqc9NY7iL8F');

Cancel fine-tune

$result = $open_ai->cancelFineTune('ft-AF1WoRqd3aJAHsqc9NY7iL8F');

List fine-tune events

$fine_tune_events = $open_ai->listFineTuneEvents('ft-AF1WoRqd3aJAHsqc9NY7iL8F');

Delete fine-tune model

$result = $open_ai->deleteFineTune('curie:ft-acmeco-2021-03-03-21-44-20');

Retrieve engine

(Deprecated)

Retrieves an engine instance, providing basic information about the engine such as the owner and availability.

$engine = $open_ai->engine('davinci');

Models

List and describe the various models available in the API.

List models

Lists the currently available models, and provides basic information about each one such as the owner and availability.

$result = $open_ai->listModels();

Retrieve model

Retrieves a model instance, providing basic information about the model such as the owner and permissioning.

$result = $open_ai->retrieveModel("text-ada-001");

Printing results, e.g. $search

echo $search;
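Because every method returns the raw JSON string, it is worth checking for an error object before reading the payload. A sketch against a dummy API error body (the message mirrors a real context-length error):

```php
<?php
// Dummy response shaped like an OpenAI API error body.
$j = '{
   "error": {
      "message": "This model\'s maximum context length is 4097 tokens.",
      "type": "invalid_request_error",
      "param": "messages",
      "code": "context_length_exceeded"
   }
}';

$d = json_decode($j);
if (isset($d->error)) {
    // Surface the API error instead of reading a missing choices field.
    echo "API error ({$d->error->type}): {$d->error->message}";
} else {
    echo $d->choices[0]->message->content;
}
```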

Assistants (beta)

Allows you to build AI assistants within your own applications.

Create assistant

Create an assistant with a model and instructions.

$data = [
    'model' => 'gpt-3.5-turbo',
    'name' => 'my assistant',
    'description' => 'my assistant description',
    'instructions' => 'you should cordially help me',
    'tools' => [],
    'file_ids' => [],
];

$assistant = $open_ai->createAssistant($data);

Retrieve assistant

$assistantId = 'asst_zT1LLZ8dWnuFCrMFzqxFOhzz';

$assistant = $open_ai->retrieveAssistant($assistantId);

Modify assistant

$assistantId = 'asst_zT1LLZ8dWnuFCrMFzqxFOhzz';
$data = [
    'name' => 'my modified assistant',
    'instructions' => 'you should cordially help me again',
];

$assistant = $open_ai->modifyAssistant($assistantId, $data);

Delete assistant

$assistantId = 'asst_DgiOnXK7nRfyvqoXWpFlwESc';

$assistant = $open_ai->deleteAssistant($assistantId);

Lists assistants

Returns a list of assistants.

$query = ['limit' => 10];

$assistants = $open_ai->listAssistants($query);

Create assistant file

Create an assistant file by attaching a File to an assistant.

$assistantId = 'asst_zT1LLZ8dWnuFCrMFzqxFOhzz';
$fileId = 'file-jrNZZZBAPGnhYUKma7CblGoR';

$file = $open_ai->createAssistantFile($assistantId, $fileId);

Retrieve assistant file

$assistantId = 'asst_zT1LLZ8dWnuFCrMFzqxFOhzz';
$fileId = 'file-jrNZZZBAPGnhYUKma7CblGoR';

$file = $open_ai->retrieveAssistantFile($assistantId, $fileId);

Delete assistant file

$assistantId = 'asst_zT1LLZ8dWnuFCrMFzqxFOhzz';
$fileId = 'file-jrNZZZBAPGnhYUKma7CblGoR';

$file = $open_ai->deleteAssistantFile($assistantId, $fileId);

List assistant files

Returns a list of assistant files.

$assistantId = 'asst_zT1LLZ8dWnuFCrMFzqxFOhzz';
$query = ['limit' => 10];

$files = $open_ai->listAssistantFiles($assistantId, $query);

Threads (beta)

Create threads that assistants can interact with.

Create thread

$data = [
    'messages' => [
        [
            'role' => 'user',
            'content' => 'Hello, what is AI?',
            'file_ids' => [],
        ],
    ],
];

$thread = $open_ai->createThread($data);

Retrieve thread

$threadId = 'thread_YKDArENVWFDO2Xz3POifFYlp';

$thread = $open_ai->retrieveThread($threadId);

Modify thread

$threadId = 'thread_YKDArENVWFDO2Xz3POifFYlp';
$data = [
    'metadata' => ['test' => '1234abcd'],
];

$thread = $open_ai->modifyThread($threadId, $data);

Delete thread

$threadId = 'thread_YKDArENVWFDO2Xz3POifFYlp';

$thread = $open_ai->deleteThread($threadId);

Messages (beta)

Create messages within threads.

Create message

$threadId = 'thread_YKDArENVWFDO2Xz3POifFYlp';
$data = [
    'role' => 'user',
    'content' => 'How does AI work? Explain it in simple terms.',
];

$message = $open_ai->createThreadMessage($threadId, $data);

Retrieve message

$threadId = 'thread_d86alfR2rfF7rASyV4V7hicz';
$messageId = 'msg_d37P5XgREsm6BItOcppnBO1b';

$message = $open_ai->retrieveThreadMessage($threadId, $messageId);

Modify message

$threadId = 'thread_d86alfR2rfF7rASyV4V7hicz';
$messageId = 'msg_d37P5XgREsm6BItOcppnBO1b';
$data = [
    'metadata' => ['test' => '1234abcd'],
];

$message = $open_ai->modifyThreadMessage($threadId, $messageId, $data);

Lists messages

Returns a list of messages for a given thread.

$threadId = 'thread_d86alfR2rfF7rASyV4V7hicz';
$query = ['limit' => 10];

$messages = $open_ai->listThreadMessages($threadId, $query);

Retrieve message file

$threadId = 'thread_d86alfR2rfF7rASyV4V7hicz';
$messageId = 'msg_CZ47kAGZugAfeHMX6bmJIukP';
$fileId = 'file-CRLcY63DiHphWuBrmDWZVCgA';

$file = $open_ai->retrieveMessageFile($threadId, $messageId, $fileId);

List message files

Returns a list of message files.

$threadId = 'thread_d86alfR2rfF7rASyV4V7hicz';
$messageId = 'msg_CZ47kAGZugAfeHMX6bmJIukP';
$query = ['limit' => 10];

$files = $open_ai->listMessageFiles($threadId, $messageId, $query);

Runs (beta)

Represents an execution run on a thread.

Create run

$threadId = 'thread_d86alfR2rfF7rASyV4V7hicz';
$data = ['assistant_id' => 'asst_zT1LLZ8dWnuFCrMFzqxFOhzz'];

$run = $open_ai->createRun($threadId, $data);

Retrieve run

$threadId = 'thread_JZbzCYpYgpNb79FNeneO3cGI';
$runId = 'run_xBKYFcD2Jg3gnfrje6fhiyXj';

$run = $open_ai->retrieveRun($threadId, $runId);

Modify run

$threadId = 'thread_JZbzCYpYgpNb79FNeneO3cGI';
$runId = 'run_xBKYFcD2Jg3gnfrje6fhiyXj';
$data = [
    'metadata' => ['test' => 'abcd1234'],
];

$run = $open_ai->modifyRun($threadId, $runId, $data);

Lists runs

Returns a list of runs belonging to a thread.

$threadId = 'thread_JZbzCYpYgpNb79FNeneO3cGI';
$query = ['limit' => 10];

$runs = $open_ai->listRuns($threadId, $query);

Submit tool outputs

When a run has the status: "requires_action" and required_action.type is submit_tool_outputs, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.

$threadId = 'thread_JZbzCYpYgpNb79FNeneO3cGI';
$runId = 'run_xBKYFcD2Jg3gnfrje6fhiyXj';
$outputs = [
    'tool_outputs' => [
        ['tool_call_id' => 'call_abc123', 'output' => '28C'],
    ],
];

$run = $open_ai->submitToolOutputs($threadId, $runId, $outputs);

Cancel run

Cancels a run that is "in_progress".

$threadId = 'thread_JZbzCYpYgpNb79FNeneO3cGI';
$runId = 'run_xBKYFcD2Jg3gnfrje6fhiyXj';

$run = $open_ai->cancelRun($threadId, $runId);

Create thread and run

Create a thread and run it in one request.

$data = [
    'assistant_id' => 'asst_zT1LLZ8dWnuFCrMFzqxFOhzz',
    'thread' => [
        'messages' => [
            [
                'role' => 'user',
                'content' => 'Hello, what is AI?',
                'file_ids' => [],
            ],
        ],
    ],
];

$run = $open_ai->createThreadAndRun($data);

Retrieve run step

Retrieves a step in execution of a run.

$threadId = 'thread_JZbzCYpYgpNb79FNeneO3cGI';
$runId = 'run_xBKYFcD2Jg3gnfrje6fhiyXj';
$stepId = 'step_kwLG0vPQjqVyQHVoL7GVK3aG';

$step = $open_ai->retrieveRunStep($threadId, $runId, $stepId);

List run steps

Returns a list of run steps belonging to a run.

$threadId = 'thread_JZbzCYpYgpNb79FNeneO3cGI';
$runId = 'run_xBKYFcD2Jg3gnfrje6fhiyXj';
$query = ['limit' => 10];

$steps = $open_ai->listRunSteps($threadId, $runId, $query);

Testing

To run all tests:

composer test

To run only the tests that work for most users (excluding those that require a missing folder or that hit deprecated endpoints no longer available to most users):

./vendor/bin/pest --group=working

Changelog

Please see CHANGELOG for more information on what has changed recently.

Contributing

Please see CONTRIBUTING for details.

Security Vulnerabilities

Please report security vulnerabilities to [email protected]

Credits

License

The MIT License (MIT). Please see License File for more information.

Donation

Buy Me A Coffee

Star History

Star History Chart

open-ai's People

Contributors

adetch, ali-wells, assert6, bashar94, cotrufoa, dependabot[bot], dougkulak, dsampaolo, fireqong, github-actions[bot], gouguoyin, joacir, johanvanhelden, mahadsprouttech, marcosegato, muchwat, mydnic, orhan-cmd, orhanerday, reply2future, slaffik, yakupseymen


open-ai's Issues

'answers' is not one of ['fine-tune'] - 'purpose'

Describe the bug

I am using :

$c_file = curl_file_create($file);
$result = $open_ai->uploadFile([
    "purpose" => "answers",
    "file"    => $c_file,
]);

Results :
'answers' is not one of ['fine-tune'] - 'purpose'

To Reproduce

  1. $c_file = curl_file_create($file);
  2. $result = $open_ai->uploadFile(["purpose" => "answers", "file" => $c_file]);
  3. Results: 'answers' is not one of ['fine-tune'] - 'purpose'

Code snippets

No response

OS

win

PHP version

PHP 8

Library version

openai v3.0.1

RESOLVED: Method chat() undefined. Did not update library with composer correctly.

Describe the bug

Updated library with:

composer update orhanerday/open-ai

Receive following error

Fatal error: Uncaught Error: Call to undefined method Orhanerday\OpenAi\OpenAi::chat() in ...

when running example "Chat" endpoint code.

RESOLVED:

FYI for other newcomers: updating the library requires the following shell command ("composer require ...", not "composer update ..."):

composer require orhanerday/open-ai

Maybe add a note in README with brief upgrade instructions for others new to composer? Thanks.

Change BaseURL from https://api.openai.com/v1

Describe the feature or improvement you're requesting

There are several interesting analytics products like https://www.helicone.ai/ that allow for easy tracking of usage, and even metered billing. To make it work, we need to change the base URL from the standard OpenAI's API, to the middleman. It would be a nice feature to be able to define this as a constant or something like it.

Additional context

No response

Call to any function that sends curl request throws exception in v4.5

Describe the bug

On calling any function that sends a cURL request, e.g. completions, we get an exception in v4.5:
curl_getinfo(): supplied resource is not a valid cURL handle resource
This is because in sendRequest(), curl_getinfo() is called after curl_close(), which throws an exception and breaks everything. This makes version 4.5 not work at all.

To Reproduce

Call any endpoint, e.g. completions, using open-ai version 4.5.

Code snippets

/**
 * @param string $url
 * @param string $method
 * @param array $opts
 * @return bool|string
 */
private function sendRequest(string $url, string $method, array $opts = [])
    {
        $post_fields = json_encode($opts);

        if (array_key_exists('file', $opts) || array_key_exists('image', $opts)) {
            $this->headers[0] = $this->contentTypes["multipart/form-data"];
            $post_fields = $opts;
        } else {
            $this->headers[0] = $this->contentTypes["application/json"];
        }
        $curl_info = [
            CURLOPT_URL => $url,
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_ENCODING => '',
            CURLOPT_MAXREDIRS => 10,
            CURLOPT_TIMEOUT => $this->timeout,
            CURLOPT_FOLLOWLOCATION => true,
            CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
            CURLOPT_CUSTOMREQUEST => $method,
            CURLOPT_POSTFIELDS => $post_fields,
            CURLOPT_HTTPHEADER => $this->headers,
        ];

        if ($opts == []) {
            unset($curl_info[CURLOPT_POSTFIELDS]);
        }

        if (! empty($this->proxy)) {
            $curl_info[CURLOPT_PROXY] = $this->proxy;
        }

        if (array_key_exists('stream', $opts) && $opts['stream']) {
            $curl_info[CURLOPT_WRITEFUNCTION] = $this->stream_method;
        }

        $curl = curl_init();

        curl_setopt_array($curl, $curl_info);
        $response = curl_exec($curl);
        curl_close($curl);

        // BUG: the handle is already closed here, so curl_getinfo() throws
        // "supplied resource is not a valid cURL handle resource".
        $info = curl_getinfo($curl);
        $this->curlInfo = $info;

        return $response;
    }
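For reference, a minimal sketch of the fix (same code as above, reordered): read the transfer info while the handle is still open, then close it.

```php
$curl = curl_init();
curl_setopt_array($curl, $curl_info);
$response = curl_exec($curl);

// curl_getinfo() must run before curl_close(), otherwise the handle
// is invalid and PHP raises the exception described above.
$info = curl_getinfo($curl);
$this->curlInfo = $info;

curl_close($curl);

return $response;
```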

OS

Ubuntu 20.04 or any

PHP version

PHP 7.4.11 or any

Library version

openai v4.5

chat api sometimes unreasonably "exceeds" 4000 completion tokens

Describe the bug

I'm having an issue with the chat API saying I am exceeding the maximum length of 4000 tokens despite my request clearly not exceeding even 300 tokens. I tried the exact same request using Python's openai package and it worked fine. It seems that past a certain conversation length, the completion token count goes wrong.

To Reproduce

This is the error after running the code:
["error"]=>
object(stdClass)#28 (4) {
["message"]=>
string(189) "This model's maximum context length is 4097 tokens. However, you requested 4111 tokens (111 in the messages, 4000 in the completion). Please reduce the length of the messages or completion."
["type"]=>
string(21) "invalid_request_error"
["param"]=>
string(8) "messages"
["code"]=>
string(23) "context_length_exceeded"
}
}

Code snippets

require_once __DIR__ . '/vendor/autoload.php';
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
use Orhanerday\OpenAi\OpenAi;

$open_ai = new OpenAi($_ENV['OPENAI_API_KEY']);
$chat = $open_ai->chat([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        [
            "role"=>"system",
            "content"=>"summarize conversation within 40 words"
        ],
        [
            "role"=>"user",
            "content"=>" Anna: Hi my name is Anna User: hello how do i cook an egg? Anna: To cook an egg, bring a pot of water to boil, reduce heat, and gently add an egg for 3-5 minutes. User: what about a potato? Anna: To cook a potato, scrub it clean, poke a few holes in it, and bake in the oven at 400°F for about an hour, or until tender."
        ]
    ],
    'temperature' => 1.0,
    'max_tokens' => 4000,
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
 ]);

$d = json_decode($chat);
var_dump($d);
// Get Content
echo($d->choices[0]->message->content);
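Note that max_tokens budgets the completion only; prompt tokens come on top, so prompt + max_tokens must fit inside the model's 4097-token context (here 111 + 4000 = 4111, which is what the error reports). The Python client likely worked only because max_tokens was omitted there. A sketch with an explicit, smaller budget (1000 is an arbitrary choice):

```php
$chat = $open_ai->chat([
    'model' => 'gpt-3.5-turbo',
    'messages' => $messages,   // the same messages array as above
    'temperature' => 1.0,
    'max_tokens' => 1000,      // completion budget; 111 (prompt) + 1000 < 4097
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
]);
```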

OS

Windows

PHP version

PHP 8.2.4

Library version

openai V4.7

Is there any strategy for performing throttled parallel calls using this library?

Describe the feature or improvement you're requesting

I was just looking at this here https://github.com/openai/openai-cookbook/blob/main/examples/api_request_parallel_processor.py

It essentially allows you to queue/batch multiple calls to be run in parallel while observing the rate limits. Super useful if you need to run many calls at once and don't want to run into errors. Would something like this be feasible in PHP, I wonder?

Additional context

No response
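Not part of this library, but a rough sketch of what throttled parallel calls could look like in plain PHP with curl_multi. Everything here is hypothetical glue code (a real implementation would also need retry/backoff against the rate-limit headers); it assumes $handles is an array of ready-made cURL handles:

```php
// Hypothetical sketch: run at most $maxConcurrent requests in parallel,
// pausing between chunks as a crude throttle.
function runBatch(array $handles, int $maxConcurrent = 3): array
{
    $results = [];
    foreach (array_chunk($handles, $maxConcurrent, true) as $chunk) {
        $mh = curl_multi_init();
        foreach ($chunk as $h) {
            curl_multi_add_handle($mh, $h);
        }
        do {
            $status = curl_multi_exec($mh, $running);
            if ($running) {
                curl_multi_select($mh); // wait for activity instead of busy-looping
            }
        } while ($running && $status === CURLM_OK);
        foreach ($chunk as $key => $h) {
            $results[$key] = curl_multi_getcontent($h);
            curl_multi_remove_handle($mh, $h);
        }
        curl_multi_close($mh);
        sleep(1); // crude throttle between chunks; tune to your rate limit
    }
    return $results;
}
```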

return false in request

Describe the bug

When using this library in ThinkPHP 5, the returned value is false; nothing is returned to me except false. I have set "set OPENAI_API_KEY=sk-gjtv....".
I am sure there is nothing wrong with my code; if the library weren't loaded at all, it would show an error directly.

To Reproduce

1

Code snippets

<?php

namespace app\index\controller;

use think\Controller;
use Orhanerday\OpenAi\OpenAi;

class Index extends Controller
{
    public function index()
    {
        $open_ai_key = config("openai.OPENAI_API_KEY");
        $open_ai = new OpenAi($open_ai_key);
        //dump($open_ai_key); // have

        // $complete = $open_ai->completion([
        //     "model"   =>  "code-davinci-002",
        //     "prompt"  =>  "",
        //     "temperature" => 0,
        //     "max_tokens" => 64,
        //     "top_p" => 1,
        //     "frequency_penalty" => 0,
        //     "presence_penalty" => 0,
        //     "stop" => ["\"\"\""],
        // ]);
        $complete = $open_ai->completion([
            'model' => 'davinci',
            'prompt' => 'Hello',
            'temperature' => 0.9,
            'max_tokens' => 150,
            'frequency_penalty' => 0,
            'presence_penalty' => 0.6,
        ]);

        dump($complete); //that is "false"
        //return $this->fetch(["msg"=>$complete]);
    }
}

OS

win10

PHP version

7.4.3

Library version

3.4

GPT-3.5-Turbo logit_bias parameter

Describe the bug

Hi,

First of all thanks for your work on this.
I encounter a problem when I try to send an object of tokenized words and their biases via the "logit_bias" parameter. (https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias)

To Reproduce

  1. Use your quickstart template.
  2. Add the "logit_bias" parameter with any kind of JSON object, or even null, which according to the docs is the accepted default.
  3. The response is an error message stating that whatever we put as the value for "logit_bias" is not of type 'object' - 'logit_bias'.

Code snippets

<?php

require __DIR__ . '/vendor/autoload.php'; // remove this line if you use a PHP Framework.

use Orhanerday\OpenAi\OpenAi;

$open_ai_key = getenv('OPENAI_API_KEY');
$open_ai = new OpenAi($open_ai_key);

$complete = $open_ai->chat([
   'model' => 'gpt-3.5-turbo',
   'messages' => [
       [
           "role" => "system",
           "content" => "You are a helpful assistant."
       ],
       [
           "role" => "user",
           "content" => "Who won the world series in 2020?"
       ],
       [
           "role" => "assistant",
           "content" => "The Los Angeles Dodgers won the World Series in 2020."
       ],
       [
           "role" => "user",
           "content" => "Where was it played?"
       ],
   ],
   'temperature' => 1.0,
   'max_tokens' => 4000,
   'frequency_penalty' => 0,
   'presence_penalty' => 0,
   'logit_bias' => null // or { 3789: 10 } or anything of the sort
]);

var_dump($complete);

OS

Windows

PHP version

PHP 8.1

Library version

open-ai v4.7.1

How to get only content in output

Describe the bug

I wrote this controller in CodeIgniter 4:

 public function openAIPost()
    {
        $response = array();
        if ($this->request->getMethod() === 'post') {
            $question = $this->request->getPost('question');
            $open_ai_key = getenv('OPENAI_API_KEY');
            $open_ai = new OpenAi($open_ai_key);
            $complete = $open_ai->chat([
                'model' => 'gpt-3.5-turbo',
                'messages' => [
                    [
                        "role" => "user",
                        "content" => $question
                    ]
                ],
                'temperature' => 1.0,
                'max_tokens' => 10,
                'frequency_penalty' => 0,
                'presence_penalty' => 0,
            ]);
            $response['complete'] = $complete;
        }
        echo json_encode($response);
    }

But this is the output I get:

{"id":"chatcmpl-6xxhQ2FLm08VhE8KKHsTpilkvpMTS","object":"chat.completion","created":1679748856,"model":"gpt-3.5-turbo-0301","usage":{"prompt_tokens":9,"completion_tokens":10,"total_tokens":19},"choices":[{"message":{"role":"assistant","content":"This is a test of the AI language model."},"finish_reason":"length","index":0}]}

To Reproduce

Can you help us? How do we get only the content from the response?
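The library returns the raw JSON string, so decoding it and reading the first choice is enough:

```php
// $complete is the raw JSON string returned by $open_ai->chat(...)
$d = json_decode($complete);
$content = $d->choices[0]->message->content ?? null;
echo $content; // e.g. "This is a test of the AI language model."
```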

Code snippets

No response

OS

macOS

PHP version

8.2

Library version

openai v.3.0.1

GPT-4 model integration

Describe the feature or improvement you're requesting

I need to integrate the GPT-4 model in this repo. How can I do this?
I was trying but didn't get any response from the API.
Please let me know when you are planning to integrate the latest GPT-4 model.
I would appreciate a response.
Thanks

Additional context

Same as above.

PHPStan shows parameter error

Describe the bug

PHPStan shows error:
Parameter #2 $stream of method Orhanerday\OpenAi\OpenAi::chat() expects null, Closure given.

The second parameter $stream of the chat method is type-hinted as null.

To Reproduce

  1. use PHPStan Level 5
  2. run it

Code snippets

$open_ai->chat($options, function ($curl_info, $data) {
    echo $data . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
    return strlen($data);
});

OS

macOS

PHP version

PHP 8.0

Library version

openai V4.7.1

Function Calling?

Describe the feature or improvement you're requesting

Hi there! Glad to be using your wonderful project!
As you know for sure, OpenAI released some updates to the API, including Function Calling.
Is it possible to use Function Calling with the project right now?
If not, could you possibly add it?
Thanks anyway. Much appreciated work.

Additional context

No response
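Since the library forwards the options array to the API as-is, passing the function-calling fields through the chat payload may already work. A hedged sketch — the function schema below is illustrative, not a tested integration:

```php
$chat = $open_ai->chat([
    'model' => 'gpt-3.5-turbo-0613',
    'messages' => [
        ['role' => 'user', 'content' => 'What is the weather in Boston?'],
    ],
    // Forwarded verbatim to the API; the shape follows OpenAI's
    // function-calling documentation.
    'functions' => [
        [
            'name' => 'get_current_weather',
            'description' => 'Get the current weather for a city',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'location' => ['type' => 'string'],
                ],
                'required' => ['location'],
            ],
        ],
    ],
    'function_call' => 'auto',
]);
```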

Completions requesting gpt-4 models timeout (Error: Gateway timeout.)

Describe the bug

Calls to $openai->chat with large messages and max_tokens (~4K + ~4K respectively, for a total of ~8K) are often timing out. The PHP script that calls the function exits after 10 minutes while waiting for a response, without receiving one, sometimes receiving "Error: Gateway timeout." Calling the same script from a web browser fails even earlier, before any response to the endpoint call is received.

This does not happen every time, but has occurred almost every time $openai->chat is called with large context.

Is there:

  1. An alternative way to request large context completion that is less likely to fail in this manner?

  2. A way to request a timeout -- meaning a termination of the endpoint request along with explicit error response if OpenAI endpoint does not respond within a specified amount of time?

  3. A way to keep the API call and/or calling php script alive longer?
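On question 2, the request timeout ultimately feeds CURLOPT_TIMEOUT inside the client. A hedged sketch — whether your installed version exposes a setter named setTimeout is an assumption to verify against its source:

```php
$open_ai = new OpenAi($open_ai_key);

// Assumption: a timeout setter exists in your version; if not, the
// property that feeds CURLOPT_TIMEOUT can be set via a small subclass.
if (method_exists($open_ai, 'setTimeout')) {
    $open_ai->setTimeout(120); // seconds; fail fast instead of hanging
}

set_time_limit(0); // keep the calling PHP script itself alive longer
```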

To Reproduce

$chat = $openai->chat([
	'model' => 'gpt-4-0314',
	'messages' => $messages, // ~4K tokens of messages
	'temperature' => 0,
	'max_tokens' => 4000,
	'frequency_penalty' => 0,
	'presence_penalty' => 0
]);
var_dump(json_decode($chat));

Code snippets

No response

OS

Linux

PHP version

PHP 7.6

Library version

openai v3

Uncaught Error: Call to undefined function env()

Describe the bug

I receive this error. I'm running PHP 8.2

To Reproduce

require_once("vendor/autoload.php");
use Orhanerday\OpenAi\OpenAi;

$open_ai = new OpenAi(env('mykey'));
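env() is a framework helper (Laravel and friends), not a PHP built-in, so in a plain PHP script it is undefined. Using the core getenv() instead, as in the package quickstart:

```php
require_once("vendor/autoload.php");

use Orhanerday\OpenAi\OpenAi;

// getenv() is core PHP; env() only exists in frameworks that define it.
$open_ai = new OpenAi(getenv('OPENAI_API_KEY'));
```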

Code snippets

No response

OS

Hosting

PHP version

PHP 8.2

Library version

openai 3.1

I get nothing as response

Describe the bug

I get nothing in the response; when I var_dump, I have boolean false:

test.php:25:boolean false

To Reproduce

reproduced code

Code snippets

$prompt = "Hello";

$complete = $open_ai->completion([
    'model' => 'davinci',
    'prompt' => $prompt,
    'temperature' => 0.7,
    'max_tokens' => 150,
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
]);

var_dump($complete);
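A false return means curl_exec() itself failed before reaching the API — on Windows this is often a missing CA certificate bundle rather than a code problem (an assumption worth checking). A quick standalone probe outside the library surfaces the underlying cURL error:

```php
// Probe the API directly to see why the library's request returns false.
$ch = curl_init('https://api.openai.com/v1/models');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER => ['Authorization: Bearer ' . getenv('OPENAI_API_KEY')],
]);
$res = curl_exec($ch);
if ($res === false) {
    // Typical on Windows: "SSL certificate problem: unable to get local
    // issuer certificate" -> point curl.cainfo in php.ini at a cacert.pem.
    echo curl_error($ch);
}
curl_close($ch);
```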

OS

windows

PHP version

8,2

Library version

openai v3.5

Why is the answer so short? Is there any problem?

Describe the bug

Why is the answer so short? Is there any problem?

To Reproduce

$open_ai = new OpenAi($open_ai_key);

    $complete = $open_ai->completion([
        'model' => 'text-davinci-003',
        'prompt' => $prompt,
        'temperature' => 0.9,
        //'max_tokens' => 4096,
        'frequency_penalty' => 0,
        'presence_penalty' => 0.6,
    ]);

$prompt = "我家有只狗叫三万,你猜猜它为什么叫这个名字";
I expected the answer to be long, but the answer is:
"{"id":"cmpl-.......","object":"text_completion","created":1676702955,"model":"text-davinci-003","choices":[{"text":"?\n\n这可能是你家","index":0,"logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":44,"completion_tokens":16,"total_tokens":60}}
"
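Nothing appears wrong with the library here: the response shows finish_reason "length" and completion_tokens 16, and 16 is the completions API's default max_tokens when none is sent. Un-commenting a larger budget (leaving headroom for the prompt within the model's context) should lengthen the answer:

```php
$complete = $open_ai->completion([
    'model' => 'text-davinci-003',
    'prompt' => $prompt,
    'temperature' => 0.9,
    'max_tokens' => 1024, // default is 16, which is why the reply was cut off
    'frequency_penalty' => 0,
    'presence_penalty' => 0.6,
]);
```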

Code snippets

No response

OS

centos7.2

PHP version

PHP 7.4

Library version

openai3

Markdown while typing possible?

Describe the feature or improvement you're requesting

Well, I got this beauty of a package working, but I run into one problem I cannot seem to get my head around.
I am working with SSE and that works fine; I get all the bits and pieces, and I get them into a div.
But when ChatGPT is outputting markdown-styled content (like a table), I cannot apply markdown rendering while it's still "typing".
I can only manage to do this after ChatGPT is done.

Can anybody shed some light on this?

Additional context

No response

The model won't print/return any results.

Hi there, I'm trying to set up a working example using Laravel, but when I dd($complete) I get false.

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Orhanerday\OpenAi\OpenAi;

class HomeController extends Controller
{
    public function index() {
        $open_ai = new OpenAi(env('OPEN_AI_API_KEY'));
        $engines = $open_ai->engines();

        $complete = $open_ai->complete([
            'engine' => 'davinci',
            'prompt' => 'Hello',
            'temperature' => 0.9,
            'max_tokens' => 150,
            'frequency_penalty' => 0,
            'presence_penalty' => 0.6,
        ]);

        $data = json_decode($complete, true);

        dd($engines);
    }
}

$open_ai returns an object with engine, headers and contentTypes.

$engines returns false as well; it's as if the library isn't even making the request.

$data returns null

The data was randomly merged

Describe the bug

When I've parsed the data and exported it to the front end, it should look something like this:
{"code":200,"message":"OK","data":"\u662f"}

Sometimes, however, it concatenates multiple results, which causes the front end to fail to parse the data:
{"code":200,"message":"OK","data":"\u662f"}
{"code":200,"message":"OK","data":"\u7684"}

To Reproduce

What happens is very random; if I add a random sleep to the callback function, or add more PHP_EOL, the probability of it happening decreases.

Code snippets

$this->chat($option, function ($curl_info, string $data) {
                if (! str_contains($data, 'data: [DONE]')) {
                    $result = ChatStreamResponse::parse($data);

                    if ($result) {
                        $this->buffer[] = $result->content;
                        /**
                         * json encode the response and echo it
                         * {"code":200,"message":"OK","data":"\u662f"}
                         */
                        echo AbstractResponse::sendOK($result->content)->content();
                        echo PHP_EOL;
                        ob_flush();
                        flush();
                    }
                }
                return strlen($data);
            });

OS

macOS

PHP version

php8.1

Library version

openai v4.7.1

Adding more capabilities

Hi there!

First of all, what a great package! Thank you for creating this.

I was wondering if you'd be open to any PR on the below areas:

  • Files
  • Fine tuning

I would like to extend the package so we could upload training files and ultimately fine-tune also.

Let me know :)

How to set header?

Describe the feature or improvement you're requesting

How to set header?

Additional context

No response

Where are the error codes from the OpenAI documentation ?

Describe the feature or improvement you're requesting

Hello,

I would like to know if it's currently possible to get the real OpenAI API error codes, or if this feature will be added.

Currently, when I call the chat function and get an error, it looks like the following:

object(stdClass)#26338 (1) {
  ["error"]=>
  object(stdClass)#26339 (4) {
    ["message"]=>
    string(105) "Incorrect API key provided: 6. You can find your API key at https://platform.openai.com/account/api-keys."
    ["type"]=>
    string(21) "invalid_request_error"
    ["param"]=>
    NULL
    ["code"]=>
    string(15) "invalid_api_key"
  }
}

The obtained code is "invalid_api_key", which isn't the HTTP status code documented on the OpenAI website (401) (https://platform.openai.com/docs/guides/error-codes) and isn't documented anywhere else.

It seems the status code should normally be at the root of the response, but that doesn't appear to be the case.

I thank you in advance for your reply.

Additional context

No response

HTML Format

Describe the feature or improvement you're requesting

Hello, I want to receive the incoming response in HTML format. Is this possible? Thank you.

Additional context

No response

Maximum execution time of 60 seconds exceeded on File Upload

Hello again,

I'm trying the file upload feature, but every single time it times out:

Maximum execution time of 60 seconds exceeded

The file is less than 1 KB.

This is my controller

        $file = $request->file('file');
        Storage::putFileAs('files', $file, 'sample.jsonl');
        $c_file = curl_file_create(\URL::to('storage/files/sample.jsonl'));
        echo "[";
        echo $open_ai->uploadFile(
            [
                "purpose" => "answers",
                "file" => $c_file,
            ]
        );
        echo ",";
        echo $open_ai->listFiles();
        echo "]";

And how do I interact with the file once it's uploaded? Like how do I give the AI commands based on my file contents?
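One likely culprit in the controller above (an assumption, not confirmed): curl_file_create() expects a local filesystem path, while \URL::to(...) produces an HTTP URL, so cURL can spin until the 60-second limit. A sketch using the stored file's on-disk path (storage_path is the Laravel helper; adjust to wherever putFileAs actually wrote the file):

```php
// Use the on-disk path of the stored file, not its public URL.
$path = storage_path('app/files/sample.jsonl'); // adjust to your disk config
$c_file = curl_file_create($path, 'application/jsonl', 'sample.jsonl');

echo $open_ai->uploadFile([
    'purpose' => 'answers',
    'file' => $c_file,
]);
```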

No way to list fine-tuned models

Hi!
I'm working with fine-tuned models and need to get a list of all of them. However, the engines method only returns the base models and does not include the user's fine-tuned models.
Could we have a new models() method to return all available models?
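Newer releases of this package expose the /v1/models endpoint, which lists base and fine-tuned models alike. A hedged sketch — check that your installed version actually ships this method:

```php
// Available in newer versions of the package (verify your installed release):
$models = $open_ai->listModels();
$d = json_decode($models);
foreach ($d->data as $model) {
    echo $model->id . PHP_EOL; // includes your fine-tuned models
}
```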

$openai->completion (singular) works where the OpenAI documentation specifies https://api.openai.com/v1/completions (plural)

Describe the bug

The README places links to OpenAI's reference documentation above the fold (at the very beginning), creating the impression that the library's methods share the names of the OpenAI endpoint paths. For example, new users of the library may assume Completions are accessed by a method named "completions", such as:

$openai->completion() (spelled singular)

since the linked OpenAI reference documentation for "completions" endpoint describes endpoint as

POST https://api.openai.com/v1/completions (spelled plural)

Yet, the library's method is completion (spelled singular)

$openai->completion() (spelled singular)

This is made clear further down the README, but it's possible many new users will get sidetracked following links to OpenAI's reference and trying to use the endpoint paths as library method names. I suggest two possible fixes:

  1. At the top of the README, call out that the method names are described further down.

  2. Create a prominent table mapping the OpenAI reference endpoints to the library's methods.

Thanks

To Reproduce

$openai->completion()

Code snippets

No response

OS

Linux

PHP version

PHP 7.4

Library version

openai v3

Variable $txt does not include initial parts of text

Describe the bug

I'm having an issue where the variable $txt is not including the initial parts of the text. Instead of "Hello world", it only contains "lo world". This is the code that I'm using:

header('Content-type: text/event-stream');
header('Cache-Control: no-cache');
$txt = "";
$complete = $open_ai->chat($opts, function ($curl_info, $data) use (&$role, &$txt, &$contentId) {
    if ($obj = json_decode($data) and $obj->error->message != "") {
        error_log(json_encode($obj->error->message));
    } else {
        echo $data;
        $clean = str_replace("data: ", "", $data);
        $arr = json_decode($clean, true);

        if ($data != "data: [DONE]\n\n" and isset($arr["choices"][0]["delta"]["content"])) {
            $txt .= $arr["choices"][0]["delta"]["content"];
        }
    }

    echo PHP_EOL;
    ob_flush();
    flush();
    return strlen($data);
});

I'm not sure why $txt is not including the initial parts of the text. Can someone help me with this issue?

To Reproduce

  1. Execute the code provided in the question.

  2. Ensure that the $open_ai object is properly instantiated and configured with the appropriate options.

  3. Trigger the function that calls the $open_ai->chat() method.

  4. Check the output and observe that the variable $txt does not include the initial parts of the text.
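A plausible cause (an assumption, not confirmed): a single callback invocation can carry several `data: {...}` events concatenated, so the single str_replace + json_decode fails on those chunks and silently drops their deltas. Splitting each buffer into individual events is more robust:

```php
$txt = "";
$complete = $open_ai->chat($opts, function ($curl_info, $data) use (&$txt) {
    // One $data chunk may hold several SSE events; handle each separately.
    foreach (explode("\n\n", trim($data)) as $event) {
        $payload = trim(str_replace('data: ', '', $event));
        if ($payload === '' || $payload === '[DONE]') {
            continue;
        }
        $arr = json_decode($payload, true);
        if (isset($arr['choices'][0]['delta']['content'])) {
            $txt .= $arr['choices'][0]['delta']['content'];
        }
    }
    echo $data;
    ob_flush();
    flush();
    return strlen($data);
});
```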

Code snippets

No response

OS

Ubuntu

PHP version

7.4.21

Library version

openai 4.7.1

Request batching

Describe the feature or improvement you're requesting

From the documentation, it seems there is no option for batching requests to avoid hitting the rate limit?

OpenAI refers to request batching and gives a code example in Python (https://platform.openai.com/docs/guides/rate-limits/error-mitigation). It would be useful to do something similar with this package.

When I try to include several messages in a single request, it responds with a single "choice" that uses all the messages as context, rather than one response per message.

The OpenAI example below is specifically for the completions endpoint rather than the chat endpoint. However, the chat endpoint is probably far more desirable for request batching, due to the current limit of 3 RPM and a price 1/10 that of the completions endpoint.

UPDATE: After doing a bit of research, there don't seem to be viable options for batching ChatCompletion requests; at least that's what I've read on the OpenAI forums. There are some workarounds, though. Here are links to the posts:

Additional context

OpenAI official batching example for completion endpoint using Python:

import openai  # for making OpenAI API requests

num_stories = 10
prompts = ["Once upon a time,"] * num_stories

# batched example, with 10 story completions per request
response = openai.Completion.create(
    model="curie",
    prompt=prompts,
    max_tokens=20,
)

# match completions to prompts by index
stories = [""] * len(prompts)
for choice in response.choices:
    stories[choice.index] = prompts[choice.index] + choice.text

# print stories
for story in stories:
    print(story)
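Since this package passes the options array straight through, the same completions-endpoint batching may already be possible by handing 'prompt' an array (an untested sketch; as noted above, chat batching has no direct equivalent):

```php
$complete = $open_ai->completion([
    'model' => 'curie',
    'prompt' => ['Once upon a time,', 'In a galaxy far away,'], // one request, many prompts
    'max_tokens' => 20,
]);

$d = json_decode($complete);
foreach ($d->choices as $choice) {
    echo $choice->index . ': ' . $choice->text . PHP_EOL; // match by index
}
```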

When I use streaming, the response content sometimes starts from a certain sentence and then repeats continuously

Describe the bug

When I use streaming, I sometimes encounter a problem where the response content starts from a certain sentence and then repeats continuously.

To Reproduce

1. The user enters a question to ask.
2. Use EventSource.
3. PHP code: see below.

Code snippets

$opts = [
            'model' => 'text-davinci-003',
            'prompt' => '你看过《满江红》吗?',
            'temperature' => 0,
            'max_tokens' => 2000,
            'stream' => true,
        ];

        header('Content-type: text/event-stream');
        header('Cache-Control: no-cache');

        $open_ai->completion($opts, function ($curl_info, $data) {
            echo $data . "\n\n";
            echo PHP_EOL;
            ob_flush();
            flush();
            info($data);
            return strlen($data);
        });


The front end uses EventSource.

OS

linux

PHP version

php 7.4

Library version

openai v3.0.1

Fine-tune pricing as in Python CLI

Describe the feature or improvement you're requesting

When reading the docs and YouTube videos on fine-tuning your model, you see that the Python CLI displays the price of the model you're about to fine-tune.

Would this be possible with this package as well? So before actually starting the fine-tune, you could see an estimate of what it is going to cost you, and then continue with the actual fine-tune, or not.

Additional context

No response

Question: Does anybody know a good token count package for PHP?

Describe the feature or improvement you're requesting

At this point we just divide the number of words by 0.75 to get an estimate, but a "real" tokenizer like tiktoken would be helpful. I did not come across one anywhere; does one exist?

If there is one, we're looking forward to using it, and perhaps it will help your package as well?

Additional context

It would help give an estimate of tokens to our end users so they know what's about to get charged by OpenAI.
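For reference, the heuristic described above in code form (the usual rule of thumb, not a real tokenizer): one token is roughly 0.75 English words, so tokens ≈ words / 0.75.

```php
// Rule-of-thumb token estimate (≈ 1 token per 0.75 words); a real BPE
// tokenizer such as tiktoken will differ, especially for non-English text.
function estimateTokens(string $text): int
{
    $words = str_word_count($text);
    return (int) ceil($words / 0.75);
}

echo estimateTokens('The quick brown fox jumps over the lazy dog'); // 9 words -> 12
```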

Chunk

Describe the feature or improvement you're requesting

Hello,
How can I disable chunked transfer (i.e. streaming) of the response?

Additional context

No response

Open AI Ada text completion returns bad results in code but works in playground

Describe the bug

I'm trying to use the Ada model of OpenAI to summarize a piece of text. When I use their playground, it works and I get a summarization that makes sense and can be used by humans.
This is the cURL command from the playground:

curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "text-ada-001",
    "prompt": "Please write a one paragraph professional synopsis:\n\nSome text",
    "temperature": 0,
    "max_tokens": 60,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0
  }'

This is the code that I use in PHP:

$open_ai_key = 'xxx';
$open_ai = new OpenAi($open_ai_key);

$complete = $open_ai->completion([
    'model' => 'text-ada-001',
    'prompt' => 'Please write a one paragraph professional synopsis: ' . $text,
    'temperature' => 0,
    'max_tokens' => 60,
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
]);

return $complete;

I have also tried both ada and davinci, and in both cases it returns nonsense. I'm saying nonsense because the text that it returns is not something that can be read as, 'Hey, this is a professional synopsis'. Here is an example of a sentence that I got in one of the iterations:

'It's not pretty and no I thought to myself, oh look IT'S NOT THAT REPUBLICAN kids would kno one of these things. OH IT'S A RESTRICTIOUS SCHOOL'.

I can assure you, there are no mentions of republicans or kids in the text that I'm processing.

My question is, am I doing something wrong? Does OpenAI work differently on the playground than in code?

To Reproduce

  1. Write a new PHP function that tries to create a professional synopsis using the ada model.
  2. Paste in a piece of text and make the request.
  3. Compare the results in the OpenAI playground and in code, and see that they are wildly different.

Example text that I'm using:

Alright then. Uh So welcome to another episode of E. M. S. On the Mountain. This one from the mountain. Oh snap. This is the show for those interested in Austrian rulers medicine. I'm Sean as always joined by my backcountry partner mike. And today's show we're gonna talk about one of our cases we had, I don't know, probably a couple of years ago now, while back. Yeah, so this was mike's so he's going to lead us through this party. So I'm guessing because I'm so bad at the editing and the getting things online at a reasonable time that this will be the first case review that we put up on the interwebs. But this is a new thing we're trying out. We're uh we're gonna do a few shows from the mountain. So normally I sit in my basement and Sean sits in his basement. We stare at each other across the magic of the interwebs. But today we are literally sitting across the table from each other in a old old building in the middle of the woods, attempting to record something that's lively and exciting for you folks. So let's see how this goes. So, so if you hear background noise like no kidding animals, birds and random people, radios, people, doors, it's all part of the part of the party and it could be a train wreck, but I have a feeling that at least one person and only one person johnny b will give us his feedback on what he thinks about this and maybe our wives will care. But yeah, all right, so let's talk about this. We had a case a couple of years ago that started out as a normal, it's a normal saturday afternoon in the woods. And uh, the tones drop. No, the tones did not drop. I apologize. There was no tone drop it. There was a, there was a call on the radio from a uh, from a report of it went out as a person with a little trouble breathing. That was an understatement in a certain respect. And it was suggested that perhaps we go kind of check it out, see what's going on. 
So we jump in our magic buggy and bop on over, luckily this patient wasn't too too far away from where we were at the time. We head down trail we found said person, they were 100 yards off of a fire road. And that fire road was maybe 5200 yards from a paved road. So they weren't real deep in the woods. But they were definitely hiking. They were definitely on a trail. They were definitely about £400. Yes, Yes. They were just want to make sure we paint that map properly. So we're, we're about 100 50 yards down a rocky trail. Maybe 200. I said I'd go between two and 300. Okay, a little bit further. That trail led to eventually an intersection with a fire road and that fire road about 100 yards or so from hard top if you will. So here we are with a £400 individual who's breathing at a rate of, I don't know, 800 or so. It's more like it was, it was more like between 35 and 40. They were uh yeah, this patient was panting like a champion. Now let's back up because I love storytelling and I like to do it and completely jacked up time order because I didn't make notes prior to proceeding down trail. After we stepped out of our trusty steed, we were greeted by the patient's family member and this family member immediately informed us that they were within two or three minutes. They informed us that their family member was just having a quote sugar problem because they had the diabetes and that she was in fact a paramedic at one point but did not want to keep that up because they were going to make her do si es She didn't know she had to do si es tu research so she lost it real quick. Like but she definitely was a paramedic and she was definitely convinced that her family member was experiencing a sugar problem. Yeah. So I'm not so convinced that she was ever a paramedic. I'm just going by what I was dog hey, it is what it is. Yeah. So we also learned as we proceeded down to link up with said patient that a number of bystanders. Another a number. Excuse me not bystanders. 
A number of other hikers that had come by had provided various snacks and sugar free products because it was believed that this individual is experiencing a hypoglycemic event and needed more sugar. So upon arrival on scene, by the way, just a little note here, we did bring a cardiac monitor with us because it was only 100 yards or so. It was a report of a person having trouble breathing. The one and only time mike and I have ever taken a life pack down a trail. There was one other time. We'll talk about that one. That's when I carried it all the way down. Dark hollow for the girl with the weird chest thing. A 30 year old whatever. And I decided I was never doing that again. I'll have to cut out the dark hollow part because that's kind of Yeah, whatever carried it down that trail. So upon arrival, one of the first things a good paramedics gonna do is assess the scene. And then since it was reported that this person is having a sugar problem, we assess the state of her sugar needs keep in mind breathing at 35 to 40. Diaphragmatic. Sitting on a log, leaning against another nice bystander. Yeah. Trying to hold her upright so we get a quick finger prick. I don't remember what her blood pressure was off the top of my head. It wasn't astronomical. No, it was maybe upper one hundred's very low 200. Remember being better than mine. It wasn't. Yeah, nothing that made you go, Oh, she's good at, I mean she's breathing at like 40 And she's diaphragmatic. She's been out hiking. And did I mention that she's not a shrinking flower? So we grab the sugar and I will never forget the sugar number 768. And I thought to myself, uh, probably doesn't need any more sugar. So we do grab a quick 12 lead Until

Code snippets

No response

OS

Linux

PHP version

PHP 7.4

Library version

openai v3.4

Bad gateway/cf_bad_gateway/null on long token request

Describe the bug

I'm working on a school project to test the differences in content created by GPT-3 vs. GPT-4, but to get enough material for testing I need to run a script that creates the content for me and saves it to a JSON file. GPT-4 doesn't seem to handle big requests very well, though.

When I request a completion from OpenAI with GPT-4, it seems to time out even after I've increased the timeout to 600 seconds.
I get "null" for the content, and oftentimes I also get "null" for the error message.
The few times I was able to produce an error message, I got "502 cf_bad_gateway".

To Reproduce

The title and the outline are generated by OpenAI with GPT-4 before this final request is made.

I run the below code with the following parameters:
keyword = expensive champagne
title = The Ultimate Guide to the Top 10 Most Expensive Champagnes: Is It Worth the Splurge?
outline = 1. Introduction to Expensive Champagne
1.1 - The allure of luxury champagne
1.2 - The role of expensive champagne in celebrations and special events

  2. Factors Contributing to the High Price of Champagne
    2.1 - Production process and limited availability
    2.2 - Prestigious brand reputation
    2.3 - Aging and maturation

  3. Top Expensive Champagne Brands
    3.1 - Dom Pérignon
    3.2 - Krug
    3.3 - Louis Roederer Cristal
    3.4 - Armand de Brignac
    3.5 - Moët & Chandon

  4. The Taste of Luxury: What Sets Expensive Champagne Apart
    4.1 - Flavor profile and complexity
    4.2 - Fine bubbles and mouthfeel
    4.3 - Pairing expensive champagne with food

  5. The Role of Expensive Champagne in Popular Culture
    5.1 - Iconic moments in movies and television
    5.2 - Celebrity endorsements and collaborations
    5.3 - Expensive champagne in music and lyrics

  6. Investing in Expensive Champagne
    6.1 - Collecting and storing champagne
    6.2 - The potential for appreciation in value
    6.3 - Risks and rewards of investing in champagne

  7. The Experience: Sipping on Expensive Champagne
    7.1 - Champagne etiquette and rituals
    7.2 - Best glassware for enjoying luxury champagne
    7.3 - Creating memorable moments with expensive champagne

  8. Conclusion: The Enduring Appeal of Expensive Champagne
    8.1 - The timeless nature of luxury
    8.2 - The joy of indulging in life's finer pleasures

Code snippets

function get_article_content($open_ai, $article_outline, $article_title, $keyword) {
    $model = 'gpt-4';
    $retry = false;
    $content_chat_decoded = null;
    $error_message = null;

    do {
        $open_ai->setTimeout(300);
        $content_chat = $open_ai->chat([
            'model' => $model,
            'messages' => [
                [
                    "role" => "system",
                    "content" => "You're AI article writer and SEO expert, you write the most amazing content.
            Each article you write should contain the following:
            - The article should be at least 1500 words, ideally 2000 words in total.
            - Start the article with an introduction. The introduction should not have a heading."
                ],
                [
                    "role" => "user",
                    "content" => "{$article_outline}
            Title: {$article_title}
            SEO keyword: {$keyword}
            Write the article:"
                ],
            ],
            'temperature' => 0.7,
            'max_tokens' => 4000,
            'frequency_penalty' => 0,
            'presence_penalty' => 0,
        ]);

        $content_chat_decoded_raw = json_decode($content_chat);
        // Guard against error responses, which carry no choices array
        $content_chat_decoded = $content_chat_decoded_raw->choices[0]->message->content ?? null;

        // Check for any error in the response
        if (isset($content_chat_decoded_raw->error)) {
            $retry = true;
            $model = 'gpt-4-0314';

            // Store the error message
            $error_message = $content_chat_decoded_raw->error->message;
        } else {
            $retry = false;
        }

    } while ($retry);

    return [$content_chat_decoded, $error_message];
}

// 3. Write out the article, keeping in mind to SEO optimize for the keyword
list($content_chat_decoded, $error_message) = get_article_content($open_ai, $article_outline, $article_title, $keyword);
$article_content = $content_chat_decoded;
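As a side note on the retry loop above: long GPT-4 generations routinely exceed gateway limits and come back as 502s, so besides raising the timeout it can help to wait between attempts with a capped exponential backoff. A sketch, where the helper name and schedule values are arbitrary illustrative choices, not part of the SDK:

```php
<?php
// Compute capped exponential backoff delays (in seconds) for a given
// number of retry attempts: base * 2^attempt, clamped to a ceiling.
function backoffDelays(int $attempts, int $baseSeconds = 2, int $capSeconds = 60): array
{
    $delays = [];
    for ($i = 0; $i < $attempts; $i++) {
        $delays[] = min($capSeconds, $baseSeconds * (2 ** $i));
    }
    return $delays;
}

// delays: 2, 4, 8, 16, 32 seconds
print_r(backoffDelays(5));
```

Calling `sleep($delay)` with each value before re-sending the request spreads retries out instead of hammering a gateway that is already struggling.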

OS

Ubuntu 20

PHP version

PHP 7.1.1-1

Library version

openai v3.0.1

Chat API continuous thread

Describe the bug

When the script sends the next request to the chat API, it returns an unrelated response. How can we continue the same thread?
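For anyone hitting this: the chat endpoint is stateless, so the only way to continue a thread is to resend the full message history with every request. A minimal sketch of keeping that history (the `addTurn` helper is our own illustration, not part of the SDK; the actual request is the library's `$open_ai->chat()` call):

```php
<?php
// The chat API is stateless: each request must carry the whole
// conversation. Keep a running $messages array and append both the
// user turn and the assistant's reply before the next call.
$messages = [
    ["role" => "system", "content" => "You are a helpful assistant."],
];

// Hypothetical helper: record one turn and return the updated history.
function addTurn(array $messages, string $role, string $content): array
{
    $messages[] = ["role" => $role, "content" => $content];
    return $messages;
}

$messages = addTurn($messages, "user", "What is the capital of France?");
// ...send $messages via $open_ai->chat(['messages' => $messages, ...]),
// then append the assistant's answer from the decoded response:
$messages = addTurn($messages, "assistant", "Paris.");
// The next question goes on top of the same history:
$messages = addTurn($messages, "user", "And its population?");
```

Passing the accumulated `$messages` as the `messages` parameter of every `chat()` call is what makes the model "remember" earlier turns.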

To Reproduce

Continue thread

Code snippets

No response

OS

MAC

PHP version

PHP latest

Library version

latest

Built in Tokenizer

Describe the feature or improvement you're requesting

It would be great if you or someone else implemented a tokenizer for the different models or one that allowed you to pass in a parameter like 'cl100k_base'.

That way we wouldn't need to juggle separate libraries that each specialise in a specific tokeniser.
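Until something like that lands, a very rough stopgap is the common heuristic of about 4 characters per token for English text. This is only an approximation; a real BPE tokenizer such as cl100k_base will count differently:

```php
<?php
// Rough token estimate for English text. ~4 characters per token is a
// widely quoted rule of thumb for OpenAI models; treat the result as
// a ballpark figure, not an exact count.
function estimateTokens(string $text): int
{
    return (int) ceil(strlen($text) / 4);
}

echo estimateTokens("Hello, world!"); // 13 chars -> 4
```

This is good enough for sanity-checking that a prompt fits under a model's context window, but budget a safety margin before relying on it.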

Additional context

No response

Disabling SSL checking in cURL can solve getting "false" as a response

Describe the bug

This is not a bug per se, but I thought I would share it with other users and the dev team in case it helps.

In another thread, users were reporting that they were getting nothing or "false" back when they were sending a simple Completion like this:

    $open_ai = new OpenAi($open_ai_key);

    $response = $open_ai->completion([
        'model' => 'davinci',
        'prompt' => 'Hello',
        'temperature' => 0.9,
        'max_tokens' => 150,
        'frequency_penalty' => 0,
        'presence_penalty' => 0.6,
    ]);

The issue is now closed, but I found that if you are testing on localhost and using something like Laragon (which is similar to XAMPP) to host your app or website, the certificates issued are very generic and can cause cURL to fail when it is set to verify the certificate.

To resolve it, you can go to the OpenAi.php file in the vendor folder and add
CURLOPT_SSL_VERIFYPEER => false,
to the sendRequest function like this:

    $curl_info = [
        CURLOPT_URL => $url,
        CURLOPT_RETURNTRANSFER => true,
        ...
        CURLOPT_HTTPHEADER => $this->headers,
        CURLOPT_SSL_VERIFYPEER => false,
    ];

and cURL no longer tries to verify the SSL certificate. This resolved my issue, so it may help others who run into the same problem.

You may want to consider adding it to the next release, but I am unsure of the implications from an SSL point of view, if any. Maybe you could add it as an optional $opts parameter for those testing on localhost, which they could then remove in production.
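For reference, a safer alternative than shipping `CURLOPT_SSL_VERIFYPEER => false` is to keep peer verification on and point cURL at an up-to-date CA bundle (cacert.pem, downloadable from the cURL website). A sketch of the option array; the helper name and bundle path are assumptions for illustration:

```php
<?php
// Keep SSL verification enabled but supply a current CA bundle so
// that generic local-stack certificates (Laragon, XAMPP, ...) do not
// make curl_exec() fail. Disabling verification exposes you to
// man-in-the-middle attacks, so prefer this approach.
function curlOptionsWithCaBundle(string $url, string $caBundlePath): array
{
    return [
        CURLOPT_URL => $url,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_SSL_VERIFYPEER => true,   // verification stays on
        CURLOPT_CAINFO => $caBundlePath,  // trusted CA bundle
    ];
}

$opts = curlOptionsWithCaBundle(
    "https://api.openai.com/v1/models",
    __DIR__ . "/cacert.pem" // example path; adjust to your setup
);
```

Setting `curl.cainfo` in php.ini achieves the same thing globally without touching vendor code.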

To Reproduce

Nothing to reproduce

Code snippets

No response

OS

Windows

PHP version

PHP 8.1.3

Library version

v3.3

Request Error:Failed to connect to api.openai.com port 443 after 21066 ms: Timed out

Describe the bug

When I use a third-party Python library for OpenAI, it returns results normally. However, when I run the PHP version, it returns false, and checking with curl_errno() revealed the error above. Both cases run under the same network conditions, and I have ruled out SSL certificate expiration. What could be the problem?

To Reproduce

(screenshot of the error attached to the original issue)

Code snippets

No response

OS

Windows

PHP version

php 8.2

Library version

4.7

No output is coming from the OpenAI library

Describe the bug

I have written this code, but it gives no output, and var_dump($result) shows bool(false).

To Reproduce

Run the code in the snippet below: echo $result prints nothing, and var_dump($result) shows bool(false).

Code snippets

<?php

    require __DIR__ . '/vendor/autoload.php';

    use Orhanerday\OpenAi\OpenAi;

    $open_ai_key = getenv('OPENAI_API_KEY');
    $open_ai = new OpenAi($open_ai_key);
    
    $result = $open_ai->completion([
        'model' => 'text-davinci-003',
        'prompt' => 'Make me a paragraph that summarizes this text for a second-grade student',
        'temperature' => 0.7,
        'max_tokens' => 64,
        "top_p" => 1.0,
        'frequency_penalty' => 0.0,
        'presence_penalty' => 0.0,
    ]);

    echo $result;
    // var_dump($result);
?>
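A note on interpreting that result: the SDK hands back the raw cURL result, so a transport failure surfaces as `bool(false)`, while an API-side failure comes back as a JSON body with an `error` key. A small illustrative helper (our own, not part of the library) to tell the cases apart:

```php
<?php
// Classify a raw response from the SDK: bool(false) means cURL itself
// failed (network, SSL, timeout); a JSON body with an "error" key
// means the API rejected the request; anything else is a normal reply.
function describeResponse($result): string
{
    if ($result === false) {
        return "transport error (check curl_error / SSL setup)";
    }
    $decoded = json_decode($result, true);
    if (isset($decoded["error"])) {
        return "API error: " . $decoded["error"]["message"];
    }
    return "ok";
}

echo describeResponse(false); // prints "transport error (check curl_error / SSL setup)"
echo describeResponse('{"error":{"message":"Invalid API key"}}'); // prints "API error: Invalid API key"
```

Checking for `false` before echoing would have surfaced the real problem (often SSL verification, as described in the issue above) instead of silent empty output.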

OS

Windows

PHP version

PHP 8.1.0

Library version

openai v.3.0.1

The event-stream feature cannot catch any errors.

Describe the feature or improvement you're requesting

Completion with stream cannot catch errors such as a wrong API key.
Streaming completion still works fine with a correct API key.
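One possible workaround in the meantime: normal event-stream chunks arrive as `data: {...}` lines, but an error body (e.g. for a wrong API key) comes back as plain JSON, so a stream callback can sniff for it. A sketch, where the helper is our own illustration, not library API:

```php
<?php
// With stream=true the server normally emits "data: {...}" lines, but
// an error (such as an invalid API key) arrives as a bare JSON object
// with an "error" key. Return the error message if one is present,
// null for ordinary stream chunks.
function extractStreamError(string $chunk): ?string
{
    // Ordinary event-stream chunks start with "data:".
    if (strpos(ltrim($chunk), "data:") === 0) {
        return null;
    }
    $decoded = json_decode($chunk, true);
    return $decoded["error"]["message"] ?? null;
}

// An error body has no "data:" prefix, so it is caught here:
echo extractStreamError('{"error":{"message":"Incorrect API key"}}'); // prints "Incorrect API key"
```

Calling something like this at the top of the callback passed to the streaming completion lets you abort early instead of waiting on a stream that will never deliver tokens.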

Additional context

No response
