Comments (6)
Hey Jake! Thanks for bringing this up; I'll look into it. The error message above usually appears when the OpenAI API has a longer outage or when there is an issue with the key. We have also updated the package release, which might resolve the issue. Could you confirm which version of Monkey-Patch you are running?
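If it helps, here is a quick stdlib-only way to check which version of the package is installed. (The distribution name "monkey-patch-py" is an assumption based on the pip package name mentioned later in the thread.)

```python
from importlib.metadata import version, PackageNotFoundError

# Print the installed version of the package, if present.
# The name "monkey-patch-py" is assumed from the pip package name.
try:
    print(version("monkey-patch-py"))
except PackageNotFoundError:
    print("monkey-patch-py is not installed in this environment")
```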
from tanuki.py.
Thank you so much! I was on 0.0.9 and just upgraded to 0.0.10, using python 3.9.7 and I am still facing the same issue.
I tested a separate script with the same key hitting the OpenAI API directly and it was working as well, which made me think the key wasn't the issue. Let me know if there's anything else I can test or information to provide!
import openai

openai.api_key = api_key  # api_key is loaded elsewhere in my script

def generate_workouts():
    prompt = {
        "role": "user",
        "content": "Generate a list of 100 unique exercises with the following attributes in a json response - attributes are "
                   "workout name as string, description as string, equipment needed as a string list, and muscle group as a list string"
    }
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[prompt],
        max_tokens=4000,
        temperature=0.7
    )
    return response.choices
Hey @jakelaws1, could you please paste your full code? Below is what I am successfully running. There are a few syntax errors in the code you posted at the top; here is what the full code should look like:
import os
import openai
from dotenv import load_dotenv
from pydantic import Field
from typing import Annotated
from monkey_patch.monkey import Monkey as monkey

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

@monkey.patch
def score_sentiment(input: str) -> Annotated[int, Field(gt=0, lt=10)]:
    """
    Scores the input between 0-10
    """

@monkey.align
def align_score_sentiment():
    """Register several examples to align your function"""
    assert score_sentiment("I love you") == 10
    assert score_sentiment("I hate you") == 0
    assert score_sentiment("You're okay I guess") == 5

# This is a normal test that can be invoked
def test_score_sentiment():
    """We can test the function as normal using Pytest or Unittest"""
    assert score_sentiment("I like you") == 7

if __name__ == '__main__':
    align_score_sentiment()
    print(score_sentiment("I like you")) # 7
    print(score_sentiment("Apples might be red"))
(Note: I'm using python-dotenv to load OPENAI_API_KEY from the .env file. This is optional; you can hardcode the OpenAI key if you like.)
I copied and pasted that exact code and ran it, and I am still getting the same error. I also created a separate virtual environment to ensure that incorrect packages weren't causing issues. Right after running this script, I ran a separate OpenAI API script directly in that same virtual environment with success (to rule out key issues).
Let me know if there's any additional information I can provide or things to test out.
python = "^3.11"
monkey-patch-py = "^0.0.10"
openai = "0.28.1"
Please see the errors below.
Traceback (most recent call last):
  File "/Users/jacoblaws/Development/python/eleva/gpt_experiments/monkey_patch_descriptions.py", line 34, in <module>
    print(score_sentiment("I like you")) # 7
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacoblaws/Library/Caches/pypoetry/virtualenvs/llmenv-ITipcQKY-py3.11/lib/python3.11/site-packages/monkey_patch/monkey.py", line 217, in wrapper
    output = Monkey.language_modeler.generate(args, kwargs, Monkey.function_modeler, function_description)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacoblaws/Library/Caches/pypoetry/virtualenvs/llmenv-ITipcQKY-py3.11/lib/python3.11/site-packages/monkey_patch/language_models/language_modeler.py", line 38, in generate
    choice = self.synthesise_answer(prompt, model, model_type, llm_parameters)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacoblaws/Library/Caches/pypoetry/virtualenvs/llmenv-ITipcQKY-py3.11/lib/python3.11/site-packages/monkey_patch/language_models/language_modeler.py", line 48, in synthesise_answer
    return self.api_models[model_type].generate(model, self.system_message, prompt, **llm_parameters)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jacoblaws/Library/Caches/pypoetry/virtualenvs/llmenv-ITipcQKY-py3.11/lib/python3.11/site-packages/monkey_patch/language_models/openai_api.py", line 80, in generate
    raise Exception("OpenAI API failed to generate a response")
Exception: OpenAI API failed to generate a response
Hi Jake, this is quite baffling to us, as we can't reproduce the error at all. Just to double-check: the OpenAI key in the .env file is stored under the "OPENAI_API_KEY" name?
We also pushed an addition so that you should now get informative OpenAI error messages instead of the generic one, which should help in debugging. It's not in the pip release yet (we're including a couple of additional enhancements), but it's in the master branch. If you want to check out that branch and run it, let me know what the error is! I'll keep you posted once we've updated the pip package.
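As a quick sanity check that the variable name matches, here is a minimal stdlib-only sketch that parses .env-style text by hand and confirms the key name is present. (The sample line is illustrative, not a real key; python-dotenv does this parsing for you in practice.)

```python
# Minimal .env-style parser to confirm the variable name (stdlib only).
# The sample content below is a placeholder, not a real key.
sample = "OPENAI_API_KEY=sk-your-key-here\n# comment line\n"

def parse_env(text):
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            name, _, value = line.partition("=")
            env[name.strip()] = value.strip()
    return env

print("OPENAI_API_KEY" in parse_env(sample))  # True
```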
Related Issues (20)
- Align statements fail when pydantic object has a negative int or float property
- Support for OpenAI 1.3.5 HOT 1
- Losing Pydantic output class definition in prompt with Optional or Union output type hints HOT 3
- Automatically run tests when raising a PR using Github Actions
- Cache function invocations
- Optionally delegate classifiers to XGBoost for finetuning and inference
- Add support for local open-source models
- Support for AWS Bedrock model stack HOT 1
- Improved telemetry
- OpenAI FinetuneJob started with wrong type
- When using Bedrock models, the finetuning disabling is not working as expected
- Add S3 support for dataset reading and saving
- Embedding support for RAG usecases. HOT 1
- Track exceptions that are not caused by external dependencies HOT 1
- Output more informative OpenAI error messages
- Add support for Embeddings from AWS Bedrock
- Replace the `dict` response in `load_existing_datasets` with a Pydantic class HOT 4
- Add a top-level config parameter to disable telemetry. HOT 3
- Expired Discord group link HOT 1
- Tests fail when run in-bulk, but pass when run one-by-one