spellcraftai / oaib
Use the OpenAI Batch tool to make async batch requests to the OpenAI API.
License: MIT License
Hi, I tried this out today and found that both Batch and Auto aren't functioning properly for chat requests with longer messages. Here's an adapted example from the existing Auto test:
import oaib
import pytest

@pytest.mark.asyncio
async def test_auto():
    batch = oaib.Auto()

    n = 20
    m = 20
    for i in range(n):
        await batch.add(
            "chat.completions.create",
            model="gpt-4",
            messages=[{"role": "user", "content": "say hello " * m}],
        )

    chats = await batch.run()
    assert len(chats) == n, f"Chat batch should return {n} results, got {len(chats)}"

    print(chats)
    chat = chats.iloc[0].get("result")
    assert chat["choices"], "Should get valid chat completions"
For m ≈ 20, this test gets pretty flaky. In practice I was trying to use it with messages of >1000 tokens, which fails more reliably.
Really cool tool, but is it possible to chain calls? I.e., batch a first call, process the result, and then add a second call to the batch that acts on the result?
I saw that there are callbacks, but I'm not sure whether they can be used to do this in a clean way.
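Until chaining is supported directly, one workaround is to run two batches sequentially and feed the first batch's results into the second. The sketch below shows only the pattern: FakeBatch is a stub standing in for oaib.Auto so the example runs offline, and the prompts plus the .upper() "completion" are purely illustrative, not oaib's actual behavior.

```python
import asyncio

# Stub standing in for oaib.Auto so the pattern runs offline;
# the add/run shape mirrors the test shown above.
class FakeBatch:
    def __init__(self):
        self.jobs = []

    async def add(self, endpoint, **kwargs):
        self.jobs.append(kwargs)

    async def run(self):
        # Echo the last user message back, upper-cased, as a
        # stand-in for a real chat completion.
        return [job["messages"][-1]["content"].upper() for job in self.jobs]

async def main():
    # First pass: batch the initial prompts.
    first = FakeBatch()
    for prompt in ["draft a haiku", "draft a limerick"]:
        await first.add("chat.completions.create",
                        messages=[{"role": "user", "content": prompt}])
    drafts = await first.run()

    # Second pass: act on each first-pass result.
    second = FakeBatch()
    for draft in drafts:
        await second.add("chat.completions.create",
                         messages=[{"role": "user", "content": f"polish: {draft}"}])
    return await second.run()

results = asyncio.run(main())
print(results)
# ['POLISH: DRAFT A HAIKU', 'POLISH: DRAFT A LIMERICK']
```

The second batch only starts once the first has fully drained, so there is no per-request callback plumbing; for true per-result chaining, a callback that re-adds work to the same batch would be needed.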
Azure's rate-limit headers don't behave exactly the same as OpenAI's API. No time to add support right now; this would make a good first PR, or I'll get to it later.
From Azure:
MAIN | 2024-03-02 13:35:10 | HEADERS | {'cache-control': 'no-cache, must-revalidate', 'content-length': '831', 'content-type': 'application/json', 'access-control-allow-origin': '*', 'apim-request-id': 'bb943c68-99ed-46ab-bda9-a5efbdf5897a', 'strict-transport-security': 'max-age=31536000; includeSubDomains; preload', 'x-content-type-options': 'nosniff', 'x-ms-region': 'North Central US', 'x-ratelimit-remaining-requests': '79', 'x-ratelimit-remaining-tokens': '61353', 'x-accel-buffering': 'no', 'x-request-id': '3a8b1a44-a02c-4a07-b51d-1c88acec4c4f', 'x-ms-client-request-id': 'bb943c68-99ed-46ab-bda9-a5efbdf5897a', 'azureml-model-session': 'd008-20240215231538', 'date': 'Sat, 02 Mar 2024 21:35:09 GMT'}
Azure seems wholly reliant on x-ratelimit-remaining-requests, whereas the library relies on OpenAI's x-ratelimit-limit-tokens headers, since Azure's remaining-requests and remaining-tokens values are cached for long periods and do not update.
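For anyone experimenting with Azure support, here is a minimal sketch of reading whichever rate-limit headers are present. The header names come from the Azure log above; the helper itself (read_rate_limits) is hypothetical, not oaib's actual implementation.

```python
# Hypothetical helper: collect rate-limit headers, tolerating the fact
# that Azure (per the log above) omits the absolute limit headers that
# OpenAI sends.
def read_rate_limits(headers: dict) -> dict:
    def to_int(value):
        return int(value) if value is not None else None

    return {
        "remaining_requests": to_int(headers.get("x-ratelimit-remaining-requests")),
        "remaining_tokens": to_int(headers.get("x-ratelimit-remaining-tokens")),
        # Present on OpenAI responses, absent from the Azure log above.
        "limit_tokens": to_int(headers.get("x-ratelimit-limit-tokens")),
    }

# Values taken from the Azure headers logged above:
azure = {
    "x-ratelimit-remaining-requests": "79",
    "x-ratelimit-remaining-tokens": "61353",
}
print(read_rate_limits(azure))
# {'remaining_requests': 79, 'remaining_tokens': 61353, 'limit_tokens': None}
```

A real fix would also have to account for Azure's stale caching of the remaining-* values, which header parsing alone cannot detect.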
Hi, thanks for the library, it's very useful! I'm just wondering whether the order of the completions is guaranteed to be preserved (i.e., the same order as the added prompts), or whether I need to use the metadata dict to pass some id?
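If ordering turns out not to be guaranteed, passing an index through metadata and sorting the result DataFrame afterwards is a safe pattern either way. A sketch using pandas; the column names ("id", "result") are assumptions here, so check what oaib actually emits in its result frame.

```python
import pandas as pd

# Simulated result frame: rows may come back in completion order
# rather than submission order. The "id" column stands in for an
# index passed via the metadata dict at add() time.
results = pd.DataFrame({
    "id": [2, 0, 1],
    "result": ["answer C", "answer A", "answer B"],
})

# Restore submission order by sorting on the metadata index.
ordered = results.sort_values("id").reset_index(drop=True)
print(ordered["result"].tolist())
# ['answer A', 'answer B', 'answer C']
```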