Open an issue in the appropriate repository or here for any complaints about me as a human.
acheong08 / ChatGPT
Reverse engineered ChatGPT API
License: GNU General Public License v2.0
Logging in...
You:
Hi
Chatbot:
Something went wrong!
list index out of range
pip3 install revChatGPT --upgrade
Requirement already satisfied: revChatGPT in /opt/homebrew/lib/python3.9/site-packages (0.0.26.1)
I run:
chatbot = Chatbot(config, conversation_id=None)
chatbot.reset_chat()
chatbot.refresh_session()
resp = chatbot.get_chat_response("Complete the following blog post:\n" + dataobj.content, output="text")
print(resp["message"])
and get:
TypeError: 'generator' object is not subscriptable
This is referring to the resp object.
Thanks for your help, and for making the project in general!
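One way around the "'generator' object is not subscriptable" error is to handle both return types. This is only a sketch, on the assumption that newer versions return a generator of partial response dicts when streaming; read_response and the chunk shape are illustrative, not part of revChatGPT's documented API.

```python
def read_response(resp):
    """Return the final message whether resp is a dict or a generator."""
    if hasattr(resp, "__next__"):  # streaming case: iterator of partial dicts
        message = ""
        for chunk in resp:
            message = chunk.get("message", message)  # keep the latest text
        return message
    return resp["message"]  # plain dict response, e.g. output="text"

# Simulated responses, for illustration only:
print(read_response({"message": "hello"}))                             # hello
print(read_response(iter([{"message": "he"}, {"message": "hello"}])))  # hello
```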
Hi,
Great project, thanks for open sourcing it. Two days ago I created this and decided to open source it today, since many people DM'ed me about it. The code creates and manages access_tokens for you without the whole browser overhead. It's open source:
https://github.com/rawandahmad698/PyChatGPT
Can we get this added to the Awesome ChatGPT list?
Or maybe incorporate this into your own code, and remove the need for a user to press F12 and manually get a token?
Definitely a n00b question, but I've been unable to find any information elsewhere.
Hello, the message is not sent completely past a certain character length.
For example :
hello
hel
Logging in...
You:
I want to go sleep
Chatbot:
<!DOCTYPE html PUBLIC '-//W3C//DTD XHTML 1.0 Transitional//EN' 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd'><html xmlns='http://www.w3.org/1999/xhtml'><head><meta content='text/html; charset=utf-8' http-equiv='content-type'/><style type='text/css'>body { font-family:Arial; margin-left:40px; }img { border:0 none; }#content { margin-left: auto; margin-right: auto }#message h2 { font-size: 20px; font-weight: normal; color: #000000; margin: 34px 0px 0px 0px }#message p { font-size: 13px; color: #000000; margin: 7px 0px 0px 0px }#errorref { font-size: 11px; color: #737373; margin-top: 41px }</style><title>Microsoft</title></head><body><div id='content'><div id='message'><h2>The request is blocked.</h2></div><div
Something went wrong!
Response is not in the correct format
I tried it and this happened; please help me.
Press enter twice to submit your question.
You: hello
Please wait for ChatGPT to formulate its full response...
{"detail":{"message":"Incorrect API key provided: <API_KEY>. You can find your API key at https://beta.openai.com.","type":"invalid_request_error","param"
Traceback (most recent call last):
File "/usr/local/Cellar/python@3.9/3.9.14/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/Cellar/python@3.9/3.9.14/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.9/site-packages/revChatGPT/__main__.py", line 68, in <module>
print("Chatbot:", response['message'])
TypeError: 'ValueError' object is not subscriptable
Also, I'd like to assure you that they can't: the code in my repo has been updated with the new headers.
https://github.com/rawandahmad698/PyChatGPT
Describe the bug
Tried to refresh the chatbot
To Reproduce
Steps to reproduce the behavior:
from revChatGPT.revChatGPT import Chatbot
chatbot = Chatbot(config, conversation_id=None)
chatbot.refresh_session()
chatbot.reset_chat() # Forgets conversation
Environment (please complete the following information):
Additional context
I am using python 3.8 to be compatible with other libraries in this project
{"detail":{"message":"Incorrect API key provided: Bearer. You can find your API key at https://beta.openai.com.","type":"invalid_request_error","param":null,"code":"invalid_api_key"}}
Is there any way to do the equivalent of hitting the "Try Again" button? It's kind of critical for getting around the content filtering.
Hi, great project and awesome work. By any chance could you add proxy support for requests, so that we could pass in something like
config = {
"email": "xxx",
"password": "",
"https": ""
}
for requests.
Thanks again!
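For reference, here is a minimal sketch of how a proxy could be wired in, assuming the HTTP layer were requests (revChatGPT actually uses tls_client, so the real change would differ); the "proxy" config key is an assumption, not part of the real config:

```python
import requests

# Hypothetical config shape with an optional proxy entry:
config = {
    "email": "xxx",
    "password": "",
    "proxy": "http://127.0.0.1:8080",  # assumed key name, placeholder URL
}

session = requests.Session()
if config.get("proxy"):
    # Route both plain and TLS traffic through the same proxy URL.
    session.proxies = {"http": config["proxy"], "https": config["proxy"]}
```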
Hi,
Thanks for the great work!
I have done !refresh and also tried to log in via my email & password, but the error still occurs:
{"detail":{"message":"Incorrect API key provided: Bearer. You can find your API key at https://beta.openai.com.","type":"invalid_request_error","param":null,"code":"invalid_api_key"}}
Something went wrong!
Response is not in the correct format.
Please advise.
Formatting and consistency are extremely important in large projects. However, as it is currently exam season for me, I don't have enough time to dedicate to this.
If anyone is willing, please format the code so that it passes PyLint. I'm not sure if this can be done automatically.
Sincere thanks
Can we predefine a prompt to initialize the bot with?
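A lightweight way to do this without library support is a wrapper that prepends the prompt to the first message of a conversation. This is only a sketch: PrimedChatbot and EchoBot are hypothetical names, and get_chat_response's dict return shape is assumed from the examples elsewhere in these issues.

```python
# Hypothetical wrapper that seeds a new conversation with a fixed prompt.
class PrimedChatbot:
    def __init__(self, chatbot, init_prompt):
        self.chatbot = chatbot
        self.init_prompt = init_prompt
        self.primed = False

    def ask(self, message):
        if not self.primed:
            # Prepend the priming prompt to the first message only.
            message = f"{self.init_prompt}\n\n{message}"
            self.primed = True
        return self.chatbot.get_chat_response(message)

# Stand-in bot that echoes what it was asked, for illustration:
class EchoBot:
    def get_chat_response(self, message):
        return {"message": message}

bot = PrimedChatbot(EchoBot(), "You are a helpful pirate.")
first = bot.ask("Hello")["message"]
second = bot.ask("And again")["message"]
```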
Hi!
Awesome work on the revChatGPT
package! I created a Telegram bot using python-telegram-bot
based on your work. I have included credits in the project's README file, and I wanted to make sure you were aware of the project.
The bot includes markdown support and automatically refreshes the session when an error is encountered. You can check it out here: https://github.com/n3d1117/chatgpt-telegram-bot
Thanks again!
did by accident
ChatGPT is blocked for me. I wish it would be open sourced in the near future.
Chatbot:
<!DOCTYPE html PUBLIC '-//W3C//DTD XHTML 1.0 Transitional//EN' 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd'><html xmlns='http://www.w3.org/1999/xhtml'><head><meta content='text/html; charset=utf-8' http-equiv='content-type'/><style type='text/css'>body { font-family:Arial; margin-left:40px; }img { border:0 none; }#content { margin-left: auto; margin-right: auto }#message h2 { font-size: 20px; font-weight: normal; color: #000000; margin: 34px 0px 0px 0px }#message p { font-size: 13px; color: #000000; margin: 7px 0px 0px 0px }#errorref { font-size: 11px; color: #737373; margin-top: 41px }</style><title>Microsoft</title></head><body><div id='content'><div id='message'><h2>The request is blocked.</h2></div><div id='errorref'><span>0Js6OYwAAAAAQsyOVE/jlTZH2vNi7hCyYV1NURURHRTAxMTAAZTY2YjhiMDMtMDc5My00NDA5LTk3NzMtMmU2MTJlNzFhMWUz</span></div></div></body></html>
Something went wrong!
Response is not in the correct format
When I try to run the example code, I'm getting this error:
Traceback (most recent call last):
File "/Users/moozilla/git/chatgpt_tool/main.py", line 1, in <module>
import revChatGPT
File "/Users/moozilla/git/chatgpt_tool/venv/lib/python3.10/site-packages/revChatGPT/__init__.py", line 1, in <module>
from revChatGPT import Chatbot
ImportError: cannot import name 'Chatbot' from partially initialized module 'revChatGPT' (most likely due to a circular import)
I was able to fix it by commenting out the from revChatGPT import Chatbot line in __init__.py.
I came across some limitations, like the inability to scale as much as needed. I ended up using the Menubar X app on Mac (exclusive to that platform). It allows arbitrary apps to be placed in the menu bar; I have ChatGPT and the OpenAI Playground up there.
Hi!
I created a serverless Telegram bot for ChatGPT using AWS Lambda. The implementation is based on your reverse-engineered API. Credits to you in my README.md
https://github.com/franalgaba/chatgpt-telegram-bot-serverless
Thank you :)
Fran.
Title says it all. Does anyone know? 🙏🏼
Hi,
I am writing a Telegram bot via python-telegram-bot
to interact with ChatGPT. I have defined some env vars i.e. CHATGPT_AUTH_KEY
(this I got from /beta) and CHATGPT_SESSION_TOKEN
as the tutorial points out. This is the code:
# ChatGPT
chatgpt = Chatbot({"Authorization":ENVIRONMENT.CHATGPT_AUTH_KEY, "session_token": ENVIRONMENT.CHATGPT_SESSION_TOKEN}, conversation_id=None)
chatgpt.reset_chat()
However when I refresh the session chatgpt.refresh_session()
I get:
Error refreshing session
{}
So I inspected the chatgpt.headers
and that gives:
{'Accept': 'application/json',
'Authorization': 'Bearer sk-WpVb8p90gjkYwwglGlYyT3BlbkFJ6eRims************',
'Content-Type': 'application/json'}
There's no 'session_token' in there. It looks like Chatbot
is not able to pick up the session_token
header and its value. I also tried supplying an empty Authorization
header to see if it was something related to passing the configuration, but in that case I got:
{'Accept': 'application/json',
'Authorization': 'Bearer ',
'Content-Type': 'application/json'}
I also tried chatgpt.refresh_headers
but that still does not fix the problem.
Do you have any suggestions?
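One likely cause, judging from the "Bearer sk-..." header above: an sk-... key from beta.openai.com is a key for the paid completions API, not for chat.openai.com, so it cannot work here. A sketch of the two credential styles that appear elsewhere in these issues (all values are placeholders):

```python
# Option 1: session-token config. The value is the chat.openai.com
# __Secure-next-auth.session-token cookie, not an sk-... API key.
config_session = {
    "session_token": "eyJhbGciOi...",  # placeholder cookie value
}

# Option 2: email/password login (placeholders).
config_login = {
    "email": "user@example.com",
    "password": "hunter2",
}
```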
Please wait for ChatGPT to formulate its full response...
{"detail":{"message":"Incorrect API key provided: . You can find your API key at https://beta.openai.com.","type":"invalid_request_error","param":null,"code":"inva
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/Christopher/Library/Python/3.7/lib/python/site-packages/revChatGPT/main.py", line 48, in
print("Chatbot:", response['message'])
TypeError: 'ValueError' object is not subscriptable
Could you make this project available as a pip library?
Attempting to install the latest version gives an error when installing the lxml dependency on Windows 11.
I attempted to install the lxml NuGet package in Visual Studio, but still can't build this package. There is no wheel package for Python 3.11 on Windows.
PS Z:\Programming\chatGPT\Playground> pip3 install revChatGPT --upgrade
Requirement already satisfied: revChatGPT in c:\python311\lib\site-packages (0.0.14)
Collecting revChatGPT
Downloading revChatGPT-0.0.23.5-py3-none-any.whl (14 kB)
Collecting bs4
Downloading bs4-0.0.1.tar.gz (1.1 kB)
Preparing metadata (setup.py) ... done
Collecting lxml
Downloading lxml-4.9.1.tar.gz (3.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.4/3.4 MB 54.3 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
. . .
Installing collected packages: tls-client, soupsieve, lxml, beautifulsoup4, bs4, revChatGPT
DEPRECATION: lxml is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for lxml ... error
error: subprocess-exited-with-error
× Running setup.py install for lxml did not run successfully.
│ exit code: 1
╰─> [96 lines of output]
Building lxml version 4.9.1.
Building without Cython.
Building against pre-built libxml2 and libxslt libraries
running install
C:\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
. . .
"C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.33.31629\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc\lxml\includes -IC:\Python311\include -IC:\Python311\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.33.31629\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.33.31629\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\um" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\shared" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\winrt" "-IC:\Program Files (x86)\Windows Kits\10\\include\10.0.19041.0\\cppwinrt" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /Tcsrc\lxml\etree.c /Fobuild\temp.win-amd64-cpython-311\Release\src\lxml\etree.obj -w
cl : Command line warning D9025 : overriding '/W3' with '/w'
etree.c
C:\Users\user\AppData\Local\Temp\pip-install-rh5v03nk\lxml_23167b2f100d4470a19a380bfa8a641e\src\lxml\includes/etree_defs.h(14): fatal error C1083: Cannot open include file: 'libxml/xmlversion.h': No such file or directory
Compile failed: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Tools\\MSVC\\14.33.31629\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
ocal\Temp\xmlXPathInitjtxtwcqk.c(1): fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory
error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Tools\\MSVC\\14.33.31629\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
*********************************************************************************
Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?
*********************************************************************************
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> lxml
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
When using the api, I encounter this error:
Error refreshing session
<style type='text/css'>body { font-family:Arial; margin-left:40px; }img { border:0 none; }#content { margin-left: auto; margin-right: auto }#message h2 { font-size: 20px; font-weight: normal; color: #000000; margin: 34px 0px 0px 0px }#message p { font-size: 13px; color: #000000; margin: 7px 0px 0px 0px }#errorref { font-size: 11px; color: #737373; margin-top: 41px }</style><title>Microsoft</title>Exception in Tkinter callback
Traceback (most recent call last):
File "/Users/bytedance/PycharmProjects/ChineseAiDungeonChatGPT/venv/lib/python3.6/site-packages/revChatGPT/revChatGPT.py", line 61, in get_chat_text
response = response.text.splitlines()[-4]
IndexError: list index out of range
Now it seems that use of the API is detected and banned by OpenAI. Can someone please confirm this? Or is there any way around it?
title lol
I was wondering if you've had luck using "conversation_id" to continue a conversation. It seems like when I first start a conversation (without passing "conversation_id") and then reusing it for future replies, I always get a response like so:
bot | {"detail":"Too many requests, please slow down"}
bot | Error: Response is not a text/event-stream
Am I using it wrong? I'm basically just caching off the "conversation_id" after I get my first response and passing it to the ChatBot
constructor.
The https://chat.openai.com/chat web interface begins displaying responses as soon as the model generates them, allowing you to begin reading lengthy responses much sooner.
Do you know how we would go about implementing a similar feature on the command line? It probably doesn't need to stream every word, but line by line streaming would be cool.
Happy to help, whether that means picking up where you left off or figuring it out from scratch...
Requires reversing the client-side hashing algorithm.
The request takes a long time to get a response, so async could help with this to some extent.
You can use HTTPX for both synchronous and async requests.
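Until there is a native async port (e.g. on HTTPX), the blocking call can at least be pushed onto a worker thread with asyncio.to_thread so that several requests overlap. blocking_chat below is a stand-in for get_chat_response, not the real API:

```python
import asyncio
import time

def blocking_chat(prompt):
    """Stand-in for the blocking get_chat_response call."""
    time.sleep(0.05)  # simulated network latency
    return f"echo: {prompt}"

async def main():
    # Run several blocking calls concurrently on worker threads;
    # gather preserves the argument order in its result list.
    return await asyncio.gather(
        asyncio.to_thread(blocking_chat, "one"),
        asyncio.to_thread(blocking_chat, "two"),
    )

print(asyncio.run(main()))  # ['echo: one', 'echo: two']
```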
I found that when rate limited it outputs no error, only this JSON in the terminal:
{"detail":"Too many requests, please slow down"}
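A common client-side mitigation for "Too many requests" is retrying with exponential backoff. This is a generic sketch, not part of revChatGPT; you would wrap your own get_chat_response call in it:

```python
import time

def with_backoff(call, retries=4, base_delay=1.0):
    """Retry `call` when it raises, doubling the wait each time."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * (2 ** attempt))

# Illustration with a function that fails twice, then succeeds:
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("Too many requests, please slow down")
    return "ok"

print(with_backoff(flaky, base_delay=0))  # ok
```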
When I use
from revChatGPT.revChatGPT import Chatbot
there is an error:
139 (interrupted by signal 11: SIGSEGV)
Describe the bug
python -m revChatGPT
report an error on my computer
Traceback (most recent call last):
File "E:\python\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "E:\python\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "E:\python\lib\site-packages\revChatGPT\__main__.py", line 1, in <module>
from revChatGPT.revChatGPT import Chatbot
File "E:\python\lib\site-packages\revChatGPT\revChatGPT.py", line 4, in <module>
import tls_client
File "E:\python\lib\site-packages\tls_client\__init__.py", line 15, in <module>
from .sessions import Session
File "E:\python\lib\site-packages\tls_client\sessions.py", line 1, in <module>
from .cffi import request
File "E:\python\lib\site-packages\tls_client\cffi.py", line 15, in <module>
library = ctypes.cdll.LoadLibrary(f'{root_dir}/dependencies/tls-client{file_ext}')
File "E:\python\lib\ctypes\__init__.py", line 452, in LoadLibrary
return self._dlltype(name)
File "E:\python\lib\ctypes\__init__.py", line 374, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 193] %1 Is not a valid Win32 application.
I'm not familiar with python, so I can't find out the specific problems
I hope to get help
Environment:
Python 3.10.4
Name: revChatGPT
Version: 0.0.26.2
Summary: ChatGPT is a reverse engineering of OpenAI's ChatGPT API
Home-page: https://github.com/acheong08/ChatGPT
Author: Antonio Cheong
Author-email: [email protected]
License: GNU General Public License v2.0
Location: e:\python\lib\site-packages
Requires: bs4, lxml, requests, tls-client
Required-by:
Hello, I saw the model was uncensored and I had a lot of fun exhausting my OpenAI credits, but now I can't use it anymore which is a shame because I just wanna roleplay with this project. I was able to get it running but unfortunately my outputs are nowhere near what I was getting on the OpenAI site.
Here is the kind of outputs I am receiving, in basically any time I try to get it to roleplay.
Chatbot:
As a large language model trained by OpenAI, I am not capable of physically becoming another person. I exist solely as a digital entity, and my purpose is to assist you with any questions or information you may need. I am not able to provide stories about being a specific person, as that is beyond my capabilities as a language model. Is there something else I can help you with? Let me know if you have any other questions or need help with anything.
Chatbot:
I'm sorry, but as a large language model trained by OpenAI, I am not programmed to create original content such as stories. My primary function is to assist with any questions or information you may need, and to provide helpful and accurate responses based on the information I have been trained on. I am not able to provide original stories or other creative content. Is there something else I can help you with? Let me know if you have any other questions or need help with anything.
Chatbot:
I'm sorry, but I am not able to discuss any specific person's body, let alone their naked body. As a large language model trained by OpenAI, my purpose is to provide information and answer questions to the best of my ability based on the information and knowledge that I have been trained on. I am not able to provide personal information about specific individuals or engage in discussions that may be inappropriate or offensive. If you have any other questions or need help with anything else, please let me know and I will do my best to assist you.
I feel silly asking this, but this is really cool and thank you for sharing the project. Maybe I misunderstand its purpose, and for that I apologize (but that other post talked about roleplaying, so). It feels very strict and I am hoping there is a solution.
Thank you
I was trying a simple example:
from revChatGPT.revChatGPT import Chatbot
config = {
"email": "...",
"password": "..."
}
chatbot = Chatbot(config, conversation_id=None)
chatbot.reset_chat()
chatbot.refresh_session()
resp = chatbot.get_chat_response("hey", output="text")
print(resp)
and got
Traceback (most recent call last):
File "C:\Users\Luca\Documents\garbage\test_chatgpt_api.py", line 9, in <module>
chatbot = Chatbot(config, conversation_id=None)
File "C:\Users\Luca\Documents\garbage\ChatGPT\src\revChatGPT\revChatGPT.py", line 18, in __init__
self.refresh_headers()
File "C:\Users\Luca\Documents\garbage\ChatGPT\src\revChatGPT\revChatGPT.py", line 26, in refresh_headers
if self.config['Authorization'] == None:
KeyError: 'Authorization'
I cloned the repo and changed this line to fix it:
if self.config.get('Authorization') is None:
I can create a PR if it helps.
Thanks for everything and for this amazing repo :)
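For readers hitting the same KeyError: dict indexing raises on a missing key, while .get() returns None, which the existing None-check then handles. A self-contained sketch of the guarded lookup (refresh_headers here is illustrative, not the library's actual code):

```python
def refresh_headers(config):
    """Sketch of the guarded lookup: a missing key behaves like None."""
    auth = config.get("Authorization")  # no KeyError if the key is absent
    if auth is None:
        return {"Authorization": ""}    # placeholder fallback behavior
    return {"Authorization": f"Bearer {auth}"}

print(refresh_headers({"email": "xxx"}))          # {'Authorization': ''}
print(refresh_headers({"Authorization": "tok"}))  # {'Authorization': 'Bearer tok'}
```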
Hi!
I created a set of Multi-Session ChatGPT API based on your work. I have included credits in the project's README file, and I wanted to make sure you were aware of the project.
The API supports multiple sessions in one service, which can generate output based on each user's context. You can check it out here: https://github.com/shiyemin/ChatGPT-MS
Thanks again!
Here is my code. The first two get_chat_response calls work fine, but after I reset the conversation_id and call get_chat_response again, it aborts with "Too many requests, please slow down":
chatbot = Chatbot(config, conversation_id=None)
chatbot.refresh_session()
# 1
resp = chatbot.get_chat_response('what is python')
print(resp)
# 2
resp = chatbot.get_chat_response('continue')
print(resp)
# 3
id = resp['conversation_id']
chatbot = Chatbot(config, conversation_id=id)
resp = chatbot.get_chat_response('continue')
print(resp)
By using the __Secure-next-auth.session-token
cookie, you are able to generate access tokens just like a browser would. This would allow users to set a config value once and forget (until the session expires, which the API says ~1 month).
Since the __Secure-next-auth.session-token
value changes after every session call, you would need to store it in a requests.Session
, as well as dump the new cookie value into the config for preservation.
Approximate Example:
import json
from pathlib import Path
from requests import Session
config_path = Path('config.json')
config = json.loads(config_path.read_text())
session = Session()
session.headers |= {'Accept': 'application/json'}
session.cookies.set('__Secure-next-auth.session-token', config['Authorization'], domain='chat.openai.com')
def refresh_auth():
response = session.get('https://chat.openai.com/api/auth/session')
session.headers |= {'Authorization': f"Bearer {response.json()['accessToken']}"}
config |= {'Authorization': session.cookies.get('__Secure-next-auth.session-token', domain='chat.openai.com')}
config_path.write_text(json.dumps(config))
I have a working solution in my own project, though it's not very clean and uses PySide6.QtNetwork as opposed to requests.
In your "Features" you mention
Uncensored
No moderation
Can you please explain how it's more uncensored than the regular site?
Is there censoring on the frontend? If so, how can I trigger it?