sahbic / profile-gpt
An AI-driven tool to analyze your profile and gain insights into how ChatGPT interprets your personality.
License: MIT License
AttributeError: 'NoneType' object has no attribute 'startswith'
Traceback:
File "/usr/local/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/profile-gpt/dashboard.py", line 60, in <module>
num_tokens = functions.get_number_of_tokens(str(processed_messages), FAST_MODEL)
File "/profile-gpt/utils/functions.py", line 91, in get_number_of_tokens
encoding = tiktoken.encoding_for_model(model_name)
File "/usr/local/lib/python3.8/site-packages/tiktoken/model.py", line 66, in encoding_for_model
if model_name.startswith(model_prefix):
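This traceback means `tiktoken.encoding_for_model` received `None` as the model name, which typically happens when `FAST_MODEL` is missing from `.env` (an unset environment variable reads back as `None`). A minimal stdlib-only sketch of the guard — the fallback model name here is an assumption, not necessarily the repo's actual default:

```python
import os

def get_model_name() -> str:
    """Read FAST_MODEL from the environment with a safe default.

    tiktoken's encoding_for_model calls model_name.startswith(...),
    so passing it the None returned by an unset environment variable
    raises exactly this AttributeError. Guarding here fixes the crash
    at its source.
    """
    model = os.getenv("FAST_MODEL")
    if model is None:
        model = "gpt-3.5-turbo"  # assumed fallback; match your own .env
    return model
```

With this in place, a missing or incomplete `.env` degrades to a default model instead of crashing the dashboard on its first token count.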
Error: cannot access local variable 'sampled_conv_messages' where it is not associated with a value
Traceback:
File "/usr/local/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/home/user/projects/GPT/dashboard.py", line 59, in <module>
processed_messages = preprocess_messages(user_messages, total_desired_number_of_messages, number_of_words_max_per_message)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/projects/GPT/utils/functions.py", line 83, in preprocess_messages
sampled_messages.append(sampled_conv_messages)
^^^^^^^^^^^^^^^^^^^^^
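The `UnboundLocalError` suggests `sampled_conv_messages` is only assigned inside a branch or loop that can be skipped (for instance, a conversation with no usable messages), so the later `append` sees an unbound name. A hedged sketch of the initialization fix — the real `preprocess_messages` body is not shown in this issue, so the sampling logic below is illustrative only; the point is binding the variable before the conditional path:

```python
import random

def preprocess_messages(user_messages, total_desired, max_words):
    """Illustrative fix for the UnboundLocalError above: initialize
    sampled_conv_messages unconditionally at the top of each loop
    iteration, so empty conversations append an empty list instead
    of crashing. The filtering/sampling details are assumptions.
    """
    sampled_messages = []
    per_conv = max(1, total_desired // max(1, len(user_messages)))
    for conv in user_messages:
        sampled_conv_messages = []  # always bound, even for empty conversations
        short_enough = [m for m in conv if len(m.split()) <= max_words]
        if short_enough:
            sampled_conv_messages = random.sample(
                short_enough, min(per_conv, len(short_enough))
            )
        sampled_messages.append(sampled_conv_messages)
    return sampled_messages
```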
Hi,
I have this issue:
RateLimitError: You exceeded your current quota, please check your plan and billing details.
Traceback:
File "C:\Users\Seb\AppData\Roaming\Python\Python310\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "C:\Users\Seb\Documents\GitHub\profile-gpt\dashboard.py", line 84, in <module>
psychoanalyst_response_1 = llm_utils.prompt_chat(agents.psychoanalyst, prompt1, model=FAST_MODEL).replace("# ", "### ")
File "C:\Users\Seb\Documents\GitHub\profile-gpt\utils\llm_utils.py", line 28, in prompt_chat
response = openai.ChatCompletion.create(
File "C:\Users\Seb\AppData\Roaming\Python\Python310\site-packages\openai\api_resources\chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "C:\Users\Seb\AppData\Roaming\Python\Python310\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "C:\Users\Seb\AppData\Roaming\Python\Python310\site-packages\openai\api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
File "C:\Users\Seb\AppData\Roaming\Python\Python310\site-packages\openai\api_requestor.py", line 620, in _interpret_response
self._interpret_response_line(
File "C:\Users\Seb\AppData\Roaming\Python\Python310\site-packages\openai\api_requestor.py", line 683, in _interpret_response_line
raise self.handle_error_response(
--
Since I've paid for GPT-4 access, I've tried changing this line in .env:
FAST_MODEL=gpt-4
I also tried setting these variables in dashboard.py:
num_tokens_max_conversations = 300000
total_desired_number_of_messages = 24000
number_of_words_max_per_message = 6500
(I just added a 0 at the end of each.)
But the problem still exists.
Any ideas?
My .zip file weighs 0.9 MB.
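A `RateLimitError` that mentions quota is a billing error from the OpenAI API, not a token-length problem, so enlarging the token variables above cannot fix it; ChatGPT Plus is billed separately from API credits, which need their own payment method. For genuinely transient rate limits, a retry-with-backoff wrapper helps. The sketch below injects the request callable so it stays independent of the repo's `llm_utils` (the quota-string check and delays are assumptions):

```python
import time

def call_with_retry(make_request, max_retries=3, base_delay=1.0,
                    is_rate_limit=lambda e: "quota" in str(e).lower()
                    or "rate limit" in str(e).lower()):
    """Retry a request on transient rate-limit errors with exponential
    backoff. A hard quota error will simply keep failing and be
    re-raised after the last attempt - only adding API credits fixes it.
    """
    for attempt in range(max_retries):
        try:
            return make_request()
        except Exception as exc:
            if not is_rate_limit(exc) or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

In `prompt_chat`, `make_request` would be a closure over `openai.ChatCompletion.create(...)`; the fake-injection design also makes the helper testable without an API key.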
1/4 The user profile analysis is ready ✅
2/4 The personality test is ready ✅
3/4 The future prediction is ready ✅
InvalidRequestError: This model's maximum context length is 8192 tokens. However, your messages resulted in 24140 tokens. Please reduce the length of the messages.
Traceback:
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/dashboard.py", line 98, in <module>
website_data, urls = commands.stalk_user(user_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/utils/commands.py", line 50, in stalk_user
website_data.append(browse_website(url, f"Extract information about the user {user_name} in a paragraph of 3 sentences."))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/utils/commands.py", line 32, in browse_website
summary = get_text_summary(url, question)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/utils/commands.py", line 27, in get_text_summary
summary = summarize_text(text, question)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/utils/browse.py", line 150, in summarize_text
summary = create_chat_completion(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/utils/llm_utils.py", line 14, in create_chat_completion
response = openai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 620, in _interpret_response
self._interpret_response_line(
File "/Users/florin/Documents/GITHUB_PROJECTS/profile-gpt/venv/lib/python3.11/site-packages/openai/api_requestor.py", line 683, in _interpret_response_line
raise self.handle_error_response(
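A pre-flight guard that trims input to the model's context window would avoid this crash: count the tokens before calling the API and truncate when over budget. The sketch below injects the encode/decode pair so it stays dependency-free; in profile-gpt they would plausibly come from `tiktoken.encoding_for_model(model).encode` and `.decode` (an assumption — the repo's own truncation point may differ):

```python
def truncate_to_token_budget(text, max_tokens, encode, decode):
    """Trim text to at most max_tokens tokens.

    encode: str -> list of tokens; decode: list of tokens -> str.
    If the text already fits, it is returned unchanged, so the
    round-trip cost is only paid when truncation is needed.
    """
    tokens = encode(text)
    if len(tokens) <= max_tokens:
        return text
    return decode(tokens[:max_tokens])
```

Remember to leave headroom for the completion itself: with an 8192-token context, the prompt budget must be 8192 minus the requested `max_tokens` of the response.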
The prompts might be too long for OpenAI gpt-3.5-turbo constraints (4097 tokens)
Traceback (most recent call last):
File "/Users/me/profile_gpt/venv_py3/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/Users/me/profile_gpt/profile-gpt/dashboard.py", line 59, in <module>
processed_messages = functions.preprocess_messages(user_messages, total_desired_number_of_messages, number_of_words_max_per_message)
File "/Users/me/profile_gpt/profile-gpt/utils/functions.py", line 83, in preprocess_messages
sampled_messages.append(sampled_conv_messages)
UnboundLocalError: local variable 'sampled_conv_messages' referenced before assignment