openai / openai-node
The official Node.js / TypeScript library for the OpenAI API
Home Page: https://www.npmjs.com/package/openai
License: Apache License 2.0
Right now, migrating to TypeScript or creating a d.ts file manually are a couple of the options.
If the API is going to stay minimal in the long term, then a simple, manually created index.d.ts should be enough.
No response
If I call the OpenAI API with openai.createImage and my request is malformed, I just get a generic "Request failed with status code 400". If I make the same request with cURL instead, I can see the reason my request failed (invalid_request_error, rejected due to the safety system).
Would it be possible to surface these errors to the client?
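Until the library surfaces these details natively, one workaround sketch (the error shape below mirrors what the API returns through axios; the helper name and mocked error are mine):

```javascript
// Sketch: pull the API's error body out of an axios-style error instead of
// relying on the generic message. err.response.data.error is where axios
// puts the parsed JSON body the server sent back.
function describeApiError(err) {
  if (err.response && err.response.data && err.response.data.error) {
    const { type, message } = err.response.data.error;
    return `${err.response.status} ${type}: ${message}`;
  }
  return err.message; // fall back to the generic axios message
}

// Mocked axios-style error for illustration:
const fakeErr = {
  message: "Request failed with status code 400",
  response: {
    status: 400,
    data: {
      error: {
        type: "invalid_request_error",
        message: "Your request was rejected as a result of our safety system.",
      },
    },
  },
};
console.log(describeApiError(fakeErr));
// "400 invalid_request_error: Your request was rejected as a result of our safety system."
```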
No response
macOS
16
3.1.0
I am trying to get the time complexity for some source code, and the response always comes back null. The call to OpenAI works, however.
I am doing this through a Firebase Callable Cloud function, and when I log the response, this is an example of what I typically get:
completion.data.choices: [{"text":"","index":0,"logprobs":null,"finish_reason":"stop"}]
Any idea what's happening here?
const { Configuration, OpenAIApi } = require("openai");

const key = 'xxxxxxxxxxxx';
const configuration = new Configuration({ apiKey: key });
const openai = new OpenAIApi(configuration);

exports.getTimeComplexity = functions.https.onCall(async (data, context) => {
  const selection = data.selection;
  if (selection.length === 0) {
    throw new functions.https.HttpsError('invalid-argument', '[getTimeComplexity] Selection must be > 0 characters long');
  }
  if (!context.auth) {
    throw new functions.https.HttpsError('failed-precondition', '[getTimeComplexity] The function must be called while authenticated.');
  }
  // Get time complexity
  openai.createCompletion({
    model: "text-davinci-003",
    prompt: selection,
    temperature: 0,
    max_tokens: 64,
    top_p: 1.0,
    frequency_penalty: 0.0,
    presence_penalty: 0.0,
    stop: ["\n"],
  }).then((completion) => {
    const timeComplexity = completion.data.choices[0].text;
    console.log(`[getTimeComplexity] Time Complexity ✅: ${timeComplexity}`);
    return { 'success': true, 'complexity': timeComplexity };
  });
});
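One likely culprit in the snippet above (my reading, not verified against Firebase): the handler never returns the createCompletion promise, so the callable resolves before the completion arrives. A minimal illustration of that pattern, runnable without Firebase or OpenAI:

```javascript
// A handler that fires a promise without returning it resolves to undefined
// for its caller; returning (or awaiting) the promise fixes that.
async function handlerWithoutReturn(work) {
  work().then((result) => result); // result is dropped
}

async function handlerWithReturn(work) {
  return await work(); // result reaches the caller
}

const work = async () => ({ success: true, complexity: "O(n)" });

(async () => {
  console.log(await handlerWithoutReturn(work)); // undefined
  console.log(await handlerWithReturn(work));    // { success: true, complexity: 'O(n)' }
})();
```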
macOS v12.6
Node v16
openai v3.1.0
The ability to copy was something I found lacking at first, but as I played around a bit more I remembered that this thing can do basically anything, so I told it to put the text it generated into a copyable format. Usually this works, and if not I can ask it to use code format. It would, however, be more convenient to have a copy button at the top right corner, below the prompt you entered; this way you would not have the same thing twice if you needed, for example, a record of notes. It would also mean less work for the AI, as you would not need to request different formats to do one thing. I am aware I could highlight and copy manually, but text can get lengthy when fleshing out ideas.
This is mostly useful for lengthy text generation, and because for the time being you occasionally have to tell the AI to continue what it was typing, it would greatly increase efficiency when moving text into a notes folder in Evernote. This is not something that is totally undoable as-is, but adding this feature, which code format already has, would make things a bit more complete and natural for creative and productive work.
Currently, the createCompletion() function requires an engine id as its first parameter. When I pass a model id instead, I get a 404 error, which makes sense if it is expecting an engine.
Is it possible to use a fine-tuned model?
Using the config below, I get unrelated content in the choices array. Putting the same prompt into the OpenAI Playground returns the right content ("The tl;dr version of this would be to simply say that the article is about the importance of choosing the right words when communicating, and that the wrong words can easily lead to misunderstanding.").
If I change the prompt to 'Tl;dr, summarize in one paragraph without bullet:\n in one paragraph without bullet.\n', it works fine. Other content also works fine. It is as if there is some cache. I tried running from AWS Lambda and locally; both give the same wrong result.
config: {
  model: 'text-davinci-002',
  prompt: 'Tl;dr, summarize in one paragraph without bullet:\nsummarize in one paragraph without bullet.\n',
  temperature: 0.5,
  max_tokens: 320,
  best_of: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
  logprobs: 0,
  top_p: 1
},
choices: [
  {
    text: '\nThe article discusses the pros and cons of taking a gap year, or a year off between high school and college. The pros include gaining life experience, taking time to figure out what you want to study, and having the opportunity to travel. The cons include falling behind your peers academically, feeling out of place when you return to school, and struggling to find a job after graduation. Ultimately, the decision to take a gap year is a personal one and depends on what you hope to gain from the experience.',
    index: 0,
    logprobs: [Object],
    finish_reason: 'stop'
  }
],
Simply use the same config above. It keeps happening to me, always the same result.
No response
Windows
Node v16
v3.0.1
The typings were updated such that the signature is createFile(file: File), but the docs example shows a ReadStream being provided. File is not available in Node. What is meant to be done here? Is this a typo? Should it be File | ReadStream?
Try to pass a ReadStream to createFile() and see the type error.
No response
N/A
latest
latest
OpenAI has some great embedding examples in the OpenAI Cookbook and API docs for Python, but none for Node.js.
It would be awesome if you could add some for different use cases like clustering, regression, anomaly detection, visualization, search, context relevance, and information retrieval.
Thanks 🚀
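In the meantime, here is a sketch of one such Node use case (semantic search via cosine similarity over embedding vectors); the vectors below are tiny toy stand-ins for what createEmbedding would actually return:

```javascript
// Semantic search sketch: rank documents by cosine similarity between each
// document's embedding and the query embedding. Vectors are toy stand-ins.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const docs = [
  { text: "doc about cats", embedding: [0.9, 0.1, 0.0] },
  { text: "doc about cars", embedding: [0.1, 0.9, 0.2] },
];
const queryEmbedding = [0.85, 0.15, 0.05]; // pretend embedding of "cats"

// Sort a copy of docs, most similar to the query first:
const ranked = [...docs].sort(
  (x, y) =>
    cosineSimilarity(queryEmbedding, y.embedding) -
    cosineSimilarity(queryEmbedding, x.embedding)
);
console.log(ranked[0].text); // "doc about cats"
```

With real data you would fill `embedding` from the API's createEmbedding response instead of hard-coding it.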
I was using the openai.createClassification method and started getting 400 BAD REQUEST when I introduced the input string (attached at the bottom) as one of the examples for labeling.
I believe there is some sanitizing that fails somewhere: href=\"https:// becomes href=\\"https://, which is not valid JSON for the payload.
Here is the JSON which seems to be entirely valid
Here is the raw request which was created by the method and seems to be invalid JSON.
Error: Parse error on line 11:
...rom <a href=\\" https: //github.com/
----------------------^
Expecting 'EOF', '}', ':', ',', ']', got 'undefined'
Here is my code
All of your examples use await; however, your functions are not declared async (at least in version 3.0.0).
So instead of:
const val = await blahblah();
you have to do:
const val = blahblah();
val.then((data) => {
  console.log(data);
});
Is this intentional?
I was using ChatGPT to make a few different calendars for this year. In the process I had multiple tabs with a new chat open; this way I was able to have one for the entire year and one for the day-to-day routine. When I got to making the routine, I decided to toggle the screen into dark mode, as it was getting late. Upon doing so, the screen rapidly started flashing between the two modes, as I had forgotten to close the other tabs; this happened for each tab. I closed the first 3 tabs and the problem still seems to persist.
No response
chrome
December 15th version
v3.0.1 is what was filled in as an example in this text box, but I could not currently find it.
Using the function createImageVariation with a non-square image results in the following error:
(node:58341) UnhandledPromiseRejectionWarning: Error: Request failed with status code 400
at createError (/Users/kenny.lindahl/Dev/test/open-ai-gpt/node_modules/axios/lib/core/createError.js:16:15)
at settle (/Users/kenny.lindahl/Dev/test/open-ai-gpt/node_modules/axios/lib/core/settle.js:17:12)
at IncomingMessage.handleStreamEnd (/Users/kenny.lindahl/Dev/test/open-ai-gpt/node_modules/axios/lib/adapters/http.js:322:11)
at IncomingMessage.emit (events.js:387:35)
at endReadableNT (internal/streams/readable.js:1317:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
Option 1:
The client should detect that the image is not square and throw an error without calling the API.
Option 2:
Alternatively, it should resize the image (add margin, not stretch the image content) so it can be sent to the API with a successful response.
Complete node program that reproduces the issue:
const { Configuration, OpenAIApi } = require("openai");
const fs = require("fs");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

(async () => {
  const response = await openai.createImageVariation(
    fs.createReadStream(__dirname + "/images/non-square-image.png"),
    2,
    "1024x1024"
  );
  console.log("-------------------");
  console.log(response);
})();
No response
macOS Monterey: 12.5.1 (21G83)
v14.17.3
3.1.0
Hi, I use a VPN to access ChatGPT. Why do you block some countries?
No response
test
Hi team, I'm using the openai package with model text-davinci-003 in my code.
I tried to use createCompletion to get a response back, and all works well as long as I keep the prompt length below 2000 tokens.
But if I use a prompt of more than 2000 tokens, it returns a 400 error.
The documentation says the model accepts 4000 tokens:
https://beta.openai.com/docs/models/gpt-3
So is this a bug that will be fixed in the future?
Here is the code with params:
await openAIAgent.createCompletion({
  model: "text-davinci-003",
  prompt: prompt,
  temperature: 0.3,
  max_tokens: 2048,
  top_p: 1.0,
  frequency_penalty: 0.8,
  presence_penalty: 0.0,
})
Use createCompletion with a prompt of more than 2000 tokens, e.g. a prompt of between 3000 and 4000 tokens.
The params are the same as in the following code:
await openAIAgent.createCompletion({
  model: "text-davinci-003",
  prompt: prompt,
  temperature: 0.3,
  max_tokens: 2048,
  top_p: 1.0,
  frequency_penalty: 0.8,
  presence_penalty: 0.0,
})
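One thing worth checking with the params above (an assumption about the cause, not a confirmed answer): the model's context window covers the prompt plus the completion, so prompt tokens and max_tokens must fit together. A rough budget check, using the commonly cited approximation of 4 characters per token:

```javascript
// Rough sketch: the context window must hold prompt + completion together.
// 4097 is text-davinci-003's documented window; 1 token ≈ 4 chars is only
// an approximation, so treat the result as a hint, not a guarantee.
function fitsContextWindow(promptText, maxTokens, windowSize = 4097) {
  const approxPromptTokens = Math.ceil(promptText.length / 4);
  return approxPromptTokens + maxTokens <= windowSize;
}

// A ~2100-token prompt plus max_tokens: 2048 overshoots the window:
console.log(fitsContextWindow("a".repeat(8400), 2048)); // false
console.log(fitsContextWindow("a".repeat(8400), 1024)); // true
```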
No response
macOS
Node v18.12.0
openai v3.1.0
Currently it uses axios ^0.26.0 while axios is at 1.2.1.
It's easy to mitigate, but it feels really wrong to depend on such an old version, which has a totally different type interface.
Simply install the openai package and try to pass a current version of axios into the OpenAIApi instance constructor.
No response
osx
node 19
openai 3.1.0
I am trying to use the client inside a Cloudflare Worker and I get an error as follows:
TypeError: adapter is not a function
at dispatchRequest (index.js:35781:14)
at Axios.request (index.js:36049:19)
at Function.wrap [as request] (index.js:34878:20)
This seems to be a common problem, as the way axios checks for XHR breaks in Cloudflare Workers, which are a reduced Node environment:
https://community.cloudflare.com/t/typeerror-e-adapter-s-adapter-is-not-a-function/166469/2
The recommendation is to use fetch instead.
Try to use API in a cloudflare worker
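A minimal sketch of that fetch-based recommendation (the endpoint and body shape follow the public REST API; the function name is mine, and fetchImpl is injectable only so the sketch can be exercised without a network):

```javascript
// Sketch: call the completions endpoint directly with fetch, which
// Cloudflare Workers provide natively, bypassing axios entirely.
async function createCompletion(apiKey, body, fetchImpl = fetch) {
  const res = await fetchImpl("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`OpenAI request failed: ${res.status}`);
  return res.json();
}
```

In a Worker you would call createCompletion from the fetch handler with your key and a body like { model, prompt, max_tokens }.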
No response
Windows 10
Node v16
openai v3.1.0
It would be nice to have the ability to ask OpenAI what new features it has.
It can already tell you what the AI itself is capable of, but it seems oblivious to any changes in updates. It would be nice if, upon request, it listed new bug fixes, features added, and the current model version. This would help you learn whether any issues or quality-of-life improvements have changed without having to play around to see what works. I know this information is probably already listed somewhere, but even a link to that information upon request would be nice.
Description
According to the docs, a response to a completion request should have a usage property that shows how many tokens were used for the request + response. Manually checking the response of openai.createCompletion also shows that the usage property exists in response.data:
const response = await openai.createCompletion({
  model: 'text-davinci-002',
  prompt: `<someprompt>`,
});
console.log(response.data.usage)
However, the CreateCompletionResponse type does not include usage, and thus TypeScript throws an error when trying to access usage in an openai completion response.
Expected Behavior
openai should have a type definition for usage in CreateCompletionResponse that allows you to see and access the used tokens in a request.
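Until the type ships, one local workaround sketch (the field names follow the API reference's usage object; the helper type and mocked data are mine):

```typescript
// Workaround sketch: widen the response type locally so accessing `usage`
// type-checks. Field names follow the REST API's documented usage object.
interface CompletionUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

type WithUsage<T> = T & { usage?: CompletionUsage };

// Mocked response.data for illustration; in real code you would cast
// `response.data` to WithUsage<CreateCompletionResponse> instead.
const data: WithUsage<{ id: string }> = {
  id: "cmpl-123",
  usage: { prompt_tokens: 5, completion_tokens: 7, total_tokens: 12 },
};
console.log(data.usage?.total_tokens); // 12
```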
Getting this error when trying to run the following code:
Refused to set unsafe header "User-Agent"
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

const response = await openai.createCompletion("code-davinci-001", {
  prompt: filePart,
  temperature: 0.1,
  max_tokens: 2000,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0.5,
});
openai.createCompletion() throws an error with the message "Request failed with status code 400" for the following call:
const response = await openai.createCompletion({ model: "text-davinci-003", prompt: p, max_tokens, temperature });
Where
p = "Devin: Hello, how can I help you? you: What can you do for me Devin: I can help you with any questions you may have about our products or services. I can also provide you with information about our company and answer any other questions you may have. you: Okay tell me about your company Devin: Sure! Our company is a leading provider of innovative technology solutions. We specialize in developing custom software and hardware solutions for businesses of all sizes. We have alto"
max_tokens = 4000
temperature = 0.0
My configuration is correct, as all calls with a prompt of fewer than 478 characters work; but once I get past this character limit, it fails every time.
const response = await openai.createCompletion({ model: "text-davinci-003", prompt: p, max_tokens, temperature });
Error response given back to me:
`
{"message":"Request failed with status code 400","name":"Error","stack":"Error: Request failed with status code 400\n at createError (node_modules/axios/lib/core/createError.js:16:15)\n at settle (node_modules/axios/lib/core/settle.js:17:12)\n at IncomingMessage.handleStreamEnd (node_modules/axios/lib/adapters/http.js:322:11)\n at IncomingMessage.emit (node:events:539:35)\n at endReadableNT (node:internal/streams/readable:1345:12)\n at processTicksAndRejections (node:internal/process/task_queues:83:21)","config":{"transitional":{"silentJSONParsing":true,"forcedJSONParsing":true,"clarifyTimeoutError":false},"transformRequest":[null],"transformResponse":[null],"timeout":0,"xsrfCookieName":"XSRF-TOKEN","xsrfHeaderName":"X-XSRF-TOKEN","maxContentLength":-1,"maxBodyLength":-1,"headers":{"Accept":"application/json, text/plain, */*","Content-Type":"application/json","User-Agent":"OpenAI/NodeJS/3.1.0","Authorization":"Bearer sk-***","Content-Length":553},"method":"post","data":"{\"model\":\"text-davinci-003\",\"prompt\":\"Devin: Hello, how can I help you? you: What can you do for me Devin: I can help you with any questions you may have about our products or services. I can also provide you with information about our company and answer any other questions you may have. you: Okay tell me about your company Devin: Sure! Our company is a leading provider of innovative technology solutions. We specialize in developing custom software and hardware solutions for businesses of all sizes. We have alto\",\"max_tokens\":4000,\"temperature\":0}","url":"https://api.openai.com/v1/completions"},"status":400}
`
The above was printed using JSON.stringify, FYI.
macos
node 16
3.1.0
OpenAI safety best practices:
To help with monitoring for possible misuse, developers serving multiple end-users should pass an additional user parameter to OpenAI with each API call, in which user is a unique ID representing a particular end-user.
With the Python Client you can pass the additional "user" argument:
response = openai.Completion.create(
    engine="davinci",
    prompt="This is a test",
    max_tokens=5,
    user="1"
)
Is this also a feature in this node client?
I'm a bit lost as to how to actually use stream: true in this library.
Example (incorrect) syntax:
const res = await openai.createCompletion({
  model: "text-davinci-002",
  prompt: "Say this is a test",
  max_tokens: 6,
  temperature: 0,
  stream: true,
});

res.onmessage = (event) => {
  console.log(event.data);
}
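A pattern that reportedly works with v3 (treat this as a sketch, not official package API): pass { responseType: "stream" } as a second argument so axios yields a raw stream, then parse the server-sent events yourself. The parsing half, exercised here against a mocked chunk:

```javascript
// Sketch: parse one chunk of the "data: {...}" server-sent-event framing the
// completions endpoint streams back. The sample chunk is a mock.
function parseSSEChunk(chunk) {
  const texts = [];
  for (const line of chunk.toString().split("\n")) {
    const payload = line.replace(/^data: /, "").trim();
    if (!payload || payload === "[DONE]") continue; // skip blanks and the terminator
    texts.push(JSON.parse(payload).choices[0].text);
  }
  return texts;
}

const sample =
  'data: {"choices":[{"text":"This"}]}\n' +
  'data: {"choices":[{"text":" is"}]}\n' +
  "data: [DONE]\n";
console.log(parseSSEChunk(sample).join("")); // "This is"
```

With a real response you would attach this to res.data.on("data", ...) after requesting the stream.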
It would be amazingly helpful if, when exceeding 5 saved conversations, having 6 and up granted a search bar in the chat selection. This way, if you need to pull text or ideas from previous chats, you can search not just titles but keywords. For example, you might ask in a previous conversation about planning out a goal. Maybe after talking and problem-solving about keeping your resolutions, you ask for a book recommendation or a source to learn more from. A week later you are on Amazon and you think maybe you want that book the AI recommended. Now the previous chat, which was titled "new years resolution goals", will come up when typing in "new year", "book", or the title of the book, such as Atomic Habits. To take this idea further, you could ask to group conversations: maybe you have three priorities for the year beyond general goals and plans, one for working out, a second for diet, and a third for general goals/habits. You could then group these into one collection titled "New Year". Perhaps the AI could do this by finding similar context between threads, but even a manual option would be nice.
Using the AI so much for general questions, new ideas, or simply keeping a branching idea separate from the previous topic, my chat selection box needs a lot of scrolling when pulling up prior topics. This would greatly help when recalling a record of past conversations.
I have been using this chatbot to keep track of schedules for a yearly calendar and for keeping and organizing notes. When making a calendar, or really anything that requires a decent amount of typing, the chatbot will stop mid-sentence. This is fixable by saying "continue" or "you stopped halfway", etc., but it is very tedious to keep typing this partway through a length of text. It is especially annoying when you tell it to put the output into a copyable format, because the stop results in having to copy multiple pieces and edit a few lines so the text is coherent where I paste it.
No response
chrome
December 15th version
v3.0.1 is what was filled in as an example in this text box, but I could not currently find it.
According to the fine-tuning docs on OpenAI, there should be a createCompletionFromModel function in your API:
const response = await openai.createCompletionFromModel({
  model: FINE_TUNED_MODEL,
  prompt: YOUR_PROMPT,
});
There is even a post in the forums that says it was included in version 2.0.2, but I'm getting errors saying that it's not part of the import. Is that function deprecated? How do we create a completion using a fine-tuned model?
I forked and cloned the repo to search for createCompletionFromModel to make sure I wasn't missing something, but it came up empty.
No response
macOS
Node 16
latest
I'm trying to upload a file that can then be used to create a fine-tune. It's been passed through the CLI validator so I know it's correct, but I keep getting the following error from Axios:
data: {
  error: {
    message: 'The browser (or proxy) sent a request that this server could not understand.',
    type: 'server_error',
    param: null,
    code: null
  }
}
Here's how I'm trying to upload the file:
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

await openai.createFile(`${uploadFilename}.jsonl`, "fine-tune");
Am I doing this right? I can't seem to see what the problem could be.
I like to use the API to ask questions. As you know, ChatGPT saves the chat conversation, and the next time you ask a question in the chat it answers based on that conversation. But this API in the openai library seems not to save the chat.
I don't know why it doesn't save the chat as ChatGPT does. Is it because it's a free account, or something else?
I would pay for an API that saves the chat and then responds just like ChatGPT does. I need help. Thanks.
I'm using the example code from the playground:
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

const response = await openai.createCompletion("code-davinci-002", {
  prompt: "##### Translate this function from Python into Haskell\n### Python\n \n def predict_proba(X: Iterable[str]):\n return np.array([predict_one_probas(tweet) for tweet in X])\n \n### Haskell",
  temperature: 0,
  max_tokens: 54,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
  stop: ["###"],
});
This returns a 404 error. Is Codex not available via the API?
Is it possible to use the insert functionality?
Hi openai,
I'm currently using completion API and am attempting to use the edit API now as well.
Code:
const result = await openAI.createCompletion('text-davinci-002', {
  prompt: `${content}\n\nTl;dr`,
  temperature: 0.7,
  max_tokens: 60,
  top_p: 1.0,
  frequency_penalty: 0.0,
  presence_penalty: 0.0,
})

const result = await openAI.createEdit('text-davinci-002', {
  input: content,
  instruction: 'Rewrite this more simply',
})
The first request has been working for months and still is, but the second returns this:
(node:5013) UnhandledPromiseRejectionWarning: Error: Request failed with status code 404
at createError (/Users/zfoster/gravity/node_modules/openai/node_modules/axios/lib/core/createError.js:16:15)
at settle (/Users/zfoster/gravity/node_modules/openai/node_modules/axios/lib/core/settle.js:17:12)
at IncomingMessage.handleStreamEnd (/Users/zfoster/gravity/node_modules/openai/node_modules/axios/lib/adapters/http.js:322:11)
at IncomingMessage.emit (events.js:412:35)
at IncomingMessage.emit (domain.js:470:12)
at endReadableNT (internal/streams/readable.js:1317:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
Let me know if I need to make any changes, and whether there's an existing example for doing something like a "simplification rewrite". Basically I'm trying to summarize and rewrite the text in simpler words with these two operations.
Thanks!
I can't use the new moderation endpoint with this library, and I'd rather use it via the library than make the request myself.
Maybe it exists and I'm just not finding it: I need to make multiple calls while maintaining history, for example:
1- What is the size of the earth?
2- And of the moon?
Does this functionality exist or would it be a feature?
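As far as I know the completions endpoint itself is stateless, so this is usually simulated client-side by prepending the earlier turns to each new prompt. A minimal sketch (the Q:/A: framing is just one possible convention, not a library feature):

```javascript
// Client-side "memory" sketch: keep the transcript and resend it with each
// new question so the model sees the earlier context.
const history = [];

function buildPrompt(question) {
  history.push(`Q: ${question}`);
  return history.join("\n") + "\nA:";
}

function recordAnswer(answer) {
  history.push(`A: ${answer}`);
}

console.log(buildPrompt("What is the size of the earth?"));
recordAnswer("About 12,742 km in diameter.");
console.log(buildPrompt("And of the moon?"));
// The second prompt now carries the first Q/A pair, so "And of the moon?"
// can be resolved against the earlier question.
```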
No response
createModeration is giving 400 for all model/input variations.
const openai = new OpenAIApi(
  new Configuration({
    apiKey: process.env.OPEN_AI_SECRET,
  })
);

// gives 400 without clues
const moderation = await openai.createModeration({
  model: "text-davinci-003",
  input: "This is a very nice text",
});
macOs
v18.12.1
3.1.0
What happens:
Passing classification_betas, e.g. classification_betas: [1, 0.5], leads to equal f-beta values in the resulting CSV file, although precision and recall differ. The columns are named correctly in the resulting file, e.g. classification/f0.5 etc., but the cell values always equal f-1, not the β of the respective column.
What I expected:
Differing f-values, as the parameter weighs precision higher (1x, 2x, ... times as much) than recall.
Maybe I interpreted the docs wrongly, though, and this parameter is supposed to be used differently.
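For reference, here is the standard F-beta formula the columns should reflect (the definition is textbook, not taken from the OpenAI tooling), showing that F0.5 and F1 differ whenever precision and recall differ:

```javascript
// F-beta: (1 + b^2) * P * R / (b^2 * P + R). Beta < 1 weighs precision
// more heavily, beta > 1 weighs recall more heavily.
function fBeta(precision, recall, beta) {
  const b2 = beta * beta;
  return ((1 + b2) * precision * recall) / (b2 * precision + recall);
}

console.log(fBeta(0.8, 0.4, 1));   // F1   ≈ 0.5333
console.log(fBeta(0.8, 0.4, 0.5)); // F0.5 ≈ 0.6667, not equal to F1
```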
I don't get back all the code from the API. For the prompt:
write me a function in javascript that makes 10 parallel fetch requests simultaneously for 100 iterations
the code is cut off.
const { Configuration, OpenAIApi } = require("openai");
const argv = require('minimist')(process.argv.slice(2));

console.log(argv.help);

const configuration = new Configuration({
  apiKey: 'xxx',
});
const openai = new OpenAIApi(configuration);

(async () => {
  const response = await openai.createCompletion({
    model: "code-davinci-002",
    prompt: `/* javascript: ${argv.help} */`,
    temperature: 0,
    max_tokens: 256,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
  });
  response.data.choices.map(c => console.log(c.text));
})();
$ node code.js --help "write me a function that makes 10 parallel fetch requests simultaneously for 100 iterations"
No response
Linux
19
3.0.1
Is it possible to create and train a model from scratch through the API, or only fine-tune existing models?
No response
Using the call
const response = await openai.listFineTunes();
to get the list of my fine-tunes. Then, from that list, I take the fine_tuned_model field and pass it to:
await openai.deleteModel(model as string);
I receive a 404 error back that says:
{
  error: {
    message: 'That model does not exist',
    type: 'invalid_request_error',
    param: 'model',
    code: null
  }
}
The url looks like this:
https://api.openai.com/v1/models/curie%3Aft-personal-2022-05-02-16-11-13
Following the steps in the documentation here: https://beta.openai.com/docs/api-reference/fine-tunes/delete-model
Thanks!
ref: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
When running current version with axios on the Edge I got this error:
An error occurred during OpenAI request [TypeError: adapter is not a function]
Updated from v2.0.5 to 3.0.0 of the package and got 16 errors in node_modules/openai/dist/api.d.ts:
[{
  "resource": "PROJECT_PATH/node_modules/openai/dist/api.d.ts",
  "owner": "typescript",
  "code": "2304",
  "severity": 8,
  "message": "Cannot find name 'File'.",
  "source": "ts",
  "startLineNumber": 1666,
  "startColumn": 24,
  "endLineNumber": 1666,
  "endColumn": 24
}]
Simply npm install openai on any TypeScript project.
"devDependencies": {
"@types/glob": "^7.2.0",
"@types/mocha": "^9.1.1",
"@types/node": "14.x",
"@types/vscode": "^1.67.0",
"@typescript-eslint/eslint-plugin": "^5.21.0",
"@typescript-eslint/parser": "^5.21.0",
"@vscode/test-electron": "^2.1.3",
"eslint": "^8.14.0",
"glob": "^8.0.1",
"mocha": "^9.2.2",
"typescript": "^4.6.4"
},
"dependencies": {
"openai": "^3.0.0"
}
macOS
Node v16.13.1
3.0.0
All models except davinci 2+ are not working.
I get axios errors when trying to use models such as ada / babbage / etc.; everything EXCEPT davinci 2+ throws an error.
Fetching using redux.
Error snippet:
response: {
status: 404,
statusText: 'Not Found',
headers: {
date: 'Fri, 02 Dec 2022 00:49:31 GMT',
'content-type': 'application/json; charset=utf-8',
'content-length': '158',
connection: 'close',
vary: 'Origin',
},
config: {
transitional: [Object],
adapter: [Function: httpAdapter],
transformRequest: [Array],
transformResponse: [Array],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: [Object],
method: 'post',
import { OpenAIApi, Configuration } from 'openai';
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';
import getIP from '../../../utils/get-ip';

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

const redis = new Redis({
  url: process.env.UPSTASH_REST_API_DOMAIN,
  token: process.env.UPSTASH_REST_API_TOKEN,
});

const ratelimit = new Ratelimit({
  redis: redis,
  limiter: Ratelimit.slidingWindow(3, '1 d'),
});

export default async function response(req, res) {
  const ip = getIP(req);
  const result = await ratelimit.limit(ip);
  res.setHeader('X-RateLimit-Limit', result.limit);
  res.setHeader('X-RateLimit-Remaining', result.remaining);

  if (req.method !== 'POST') {
    res.status(405).json({ error: 'Method not allowed' });
    return;
  }
  if (!req.body.projectId) {
    res.status(400).json({ error: 'Missing projectId' });
    return;
  }
  if (!result.success) {
    res.status(429).json({
      error: 'You have reached your daily limit of 3 free completions. Try again tomorrow or upgrade your plan in account settings to continue using services regularly.',
    });
    return;
  }

  const completion = await openai.createCompletion({
    model: 'text-davinci-002',
    prompt: req.body.prompt,
    temperature: 0.6,
    max_tokens: 2000,
    presence_penalty: 0.5,
    // frequency_penalty: 0.5,
  });

  try {
    const completeModeration = await openai.createModeration({
      input: completion.data.choices[0].text,
      model: 'text-moderation-latest',
    });
    const moderationRes = completeModeration.data.results[0].flagged;
    if (moderationRes === false) {
      res.status(200).json({ response: completion.data.choices[0].text });
    } else {
      res.status(500).json({
        error: 'Sorry. The output has been flagged for inappropriate content. Please try again.',
      });
    }
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
}
mac
v18.12.0
3.1.0
I'm using the Node.js example from the docs, with my API key inserted.
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: "MY-KEY",
});

async function getCompletion () {
  const openai = new OpenAIApi(configuration);
  const response = await openai.createCompletion("text-curie-001", {
    prompt: "Say this is a test",
    max_tokens: 5
  })
  .catch(err => {
    console.log(err);
  });
  console.log(response);
}

getCompletion();
returns:
response: {
status: 429,
statusText: 'Too Many Requests',
headers: {
date: 'Thu, 28 Apr 2022 12:51:34 GMT',
'content-type': 'application/json; charset=utf-8',
'content-length': '205',
connection: 'close',
vary: 'Origin',
'x-request-id': 'xxxx',
'strict-transport-security': 'max-age=15724800; includeSubDomains'
},
config: {
transitional: [Object],
adapter: [Function: httpAdapter],
transformRequest: [Array],
transformResponse: [Array],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: [Object],
method: 'post',
data: '{"prompt":"Say this is a test","max_tokens":5}',
url: 'https://api.openai.com/v1/engines/text-curie-001/completions'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype],
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
_contentLength: null,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
socket: [TLSSocket],
_header: 'POST /v1/engines/text-curie-001/completions HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/2.0.5\r\n' +
'Authorization: Bearer MY-KEY\n' +
'Content-Length: 46\r\n' +
'Host: api.openai.com\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: noopPendingOutput],
agent: [Agent],
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
path: '/v1/engines/text-curie-001/completions',
_ended: true,
res: [IncomingMessage],
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: [Writable],
[Symbol(kCapture)]: false,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype]
},
data: { error: [Object] }
},
isAxiosError: true,
toJSON: [Function: toJSON]
}
The request returns a 429 status. I have zero usage on my account and I only make a single request each time.
I tried all engines.
Is this a known problem? Since this problem has little relation to this node repo, I will close this issue as soon as possible.
Update the API configuration to support Azure OpenAI endpoints as well.
In order to use the Python OpenAI library with Microsoft Azure endpoints, we need to set the api_type, api_base and api_version in addition to the api_key. The api_type must be set to 'azure' and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the engine parameter.
import openai
openai.api_type = "azure"
openai.api_key = "..."
openai.api_base = "https://example-endpoint.openai.azure.com"
openai.api_version = "2022-12-01"
completion = openai.Completion.create(engine="deployment-name", prompt="Hello world")
print(completion.choices[0].text)
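Until native support lands, one conceivable workaround with the existing Node client (basePath and baseOptions are real Configuration options in v3; the endpoint, deployment name, and api-version below are placeholders, and I have not verified that Azure accepts exactly this shape):

```javascript
// Hypothetical Azure routing via the current client's Configuration options.
const azureConfig = {
  apiKey: process.env.AZURE_OPENAI_KEY,
  // Azure scopes requests to a deployment rather than a model:
  basePath:
    "https://example-endpoint.openai.azure.com/openai/deployments/deployment-name",
  baseOptions: {
    headers: { "api-key": process.env.AZURE_OPENAI_KEY }, // Azure auth header
    params: { "api-version": "2022-12-01" },              // required query param
  },
};
// new Configuration(azureConfig) would then be passed to OpenAIApi as usual.
console.log(azureConfig.basePath);
```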
No response
# The code below works.
let readStream = fs.createReadStream("image.png");
response = await openai.createImageVariation(readStream, 1, "1024x1024");
# The code below does not work.
let readStream = https.get("https://storage.googleapis.com/inceptivestudio/1672042338704.png", (stream) => {
  return stream;
});
response = await openai.createImageVariation(readStream, 1, "1024x1024");
No response
As part of a 'Pre-launch Review' we've been instructed to provide a user id as part of our completion requests:
Pass a uniqueID for every user w/ each API call (both for Completion & the Content Filter) e.g. user= $uniqueID. This 'user' param can be passed in the request body along with other params such as prompt, max_tokens etc.
However, the CreateCompletionRequest interface does not have an optional user property.
Let me know if I'm missing anything or if anything else is required on my end.