linjungz / chat-with-your-doc
Chat with your docs in PDF/PPTX/DOCX format, using LangChain and GPT-4/ChatGPT from both Azure OpenAI Service and OpenAI
Hi, I'm trying to install this on CentOS 7 with Python 3.10, and I'm getting this for almost all requirements, not just aiofiles:
ERROR: Could not find a version that satisfies the requirement aiofiles==23.1.0 (from versions: 0.2.1, 0.3.0, 0.3.1, 0.3.2, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.8.0)
ERROR: No matching distribution found for aiofiles==23.1.0
Any idea what causes this? BTW, it works fine on my Windows computer but not on my VPS :(
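When pip only offers aiofiles versions up to 0.8.0, it is almost always resolving for an interpreter older than Python 3.7 (aiofiles 23.1.0 requires >= 3.7, and CentOS 7's default `python3` is 3.6), even if Python 3.10 is also installed. A minimal diagnostic sketch, assuming nothing about the project itself:

```python
import sys

# aiofiles 23.1.0 declares python_requires >= 3.7. If pip caps the
# available versions at 0.8.0, the pip you invoked is bound to an older
# interpreter than the Python 3.10 you intended to use.
print(sys.version_info[:2])

def interpreter_new_enough(version_info):
    """True when this interpreter can install aiofiles 23.x at all."""
    return version_info >= (3, 7)
```

If this prints `(3, 6)`, invoke pip through the newer interpreter explicitly, e.g. `python3.10 -m pip install -r requirements.txt`.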
Please delete, suspend, or take down this URL: https://waslot.azurefd.net/. It is a phishing site that uses my brand "waslot" and redirects to another brand. Please help me, sir. Thanks.
When running in a container, there's an error message after uploading a document:
AxiosError: Request failed with status code 403
Add support for streaming answers in the chat box.
Finished chain.
Finished chain.
Traceback (most recent call last):
File "/data/.venv/lib/python3.10/site-packages/gradio/routes.py", line 414, in run_predict
output = await app.get_blocks().process_api(
File "/data/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 1320, in process_api
result = await self.call_function(
File "/data/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 1048, in call_function
prediction = await anyio.to_thread.run_sync(
File "/data/.venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/data/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/data/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/data/chat-with-your-doc/chat_web.py", line 89, in get_answer
reference_html = f"""Reference [{i+1}] {os.path.basename(doc.metadata["source"])} P{doc.metadata['page']+1}\n"""
KeyError: 'page'
Originally posted by @duronxx in #14 (comment)
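The `KeyError: 'page'` happens because not every loader writes a `page` key into document metadata (the PDF loader does; DOCX/PPTX loaders typically do not). A defensive sketch of the reference label, where `metadata` stands in for LangChain's `doc.metadata` dict and `format_reference` is a hypothetical helper, not a function from the repo:

```python
import os

def format_reference(i, metadata):
    """Build the reference label without assuming a 'page' key exists.
    Omitting the page suffix for loaders that record none avoids the
    KeyError seen in the traceback above."""
    source = os.path.basename(metadata["source"])
    page = metadata.get("page")  # None when the loader recorded no page
    page_part = f" P{page + 1}" if page is not None else ""
    return f"Reference [{i + 1}] {source}{page_part}"
```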
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "E:\aiProjects\chat-with-your-doc\chat_web_st.py", line 26, in <module>
docChatBot.init_vector_db_from_documents([local_file_name])
File "E:\aiProjects\chat-with-your-doc\chatbot.py", line 223, in init_vector_db_from_documents
doc = loader.load_and_split(text_splitter)
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\langchain\vectorstores\base.py", line 306, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\langchain\vectorstores\faiss.py", line 427, in from_texts
embeddings = embedding.embed_documents(texts)
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\langchain\embeddings\openai.py", line 297, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\langchain\embeddings\openai.py", line 233, in _get_len_safe_embeddings
response = embed_with_retry(
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\langchain\embeddings\openai.py", line 64, in embed_with_retry
return _embed_with_retry(**kwargs)
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\concurrent\futures\_base.py", line 437, in result
return self.__get_result()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\concurrent\futures\_base.py", line 389, in __get_result
raise self._exception
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\langchain\embeddings\openai.py", line 62, in _embed_with_retry
return embeddings.client.create(**kwargs)
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\openai\api_requestor.py", line 230, in request
resp, got_stream = self._interpret_response(result, stream)
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\openai\api_requestor.py", line 624, in _interpret_response
self._interpret_response_line(
File "e:\aiprojects\chat-with-your-doc\.venv\lib\site-packages\openai\api_requestor.py", line 687, in _interpret_response_line
raise self.handle_error_response(
(.venv) PS E:\chat-with-your-doc> streamlit run chat_web_st.py
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://172.16.10.176:8501
Loaded vector db from local: ./data/vector_store/index
2024-03-20 16:48:26.612 Uncaught app exception
Traceback (most recent call last):
File "E:\chat-with-your-doc\.venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "E:\chat-with-your-doc\chat_web_st.py", line 77, in <module>
result_answer, result_source = docChatBot.get_answer(
^^^^^^^^^^^^^^^^^^^^^^
File "E:\chat-with-your-doc\chatbot.py", line 254, in get_answer
result = self.chatchain({
^^^^^^^^^^^^^^^^
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chains\base.py", line 140, in __call__
raise e
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chains\base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 104, in _call
new_question = self.question_generator.run(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chains\base.py", line 239, in run
return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chains\base.py", line 140, in __call__
raise e
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chains\base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chains\llm.py", line 69, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chains\llm.py", line 79, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chat_models\base.py", line 143, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chat_models\base.py", line 91, in generate
raise e
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chat_models\base.py", line 83, in generate
results = [
^
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chat_models\base.py", line 84, in <listcomp>
self._generate(m, stop=stop, run_manager=run_manager)
File "E:\chat-with-your-doc\.venv\Lib\site-packages\langchain\chat_models\openai.py", line 321, in _generate
role = stream_resp["choices"][0]["delta"].get("role", role)
~~~~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
The error occurs in chatbot's get_answer function:
result = self.chatchain({
"question": query,
"chat_history": chat_history_for_chain
},
return_only_outputs=True)
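The IndexError at `chat_models\openai.py` line 321 is commonly seen with Azure OpenAI in streaming mode: the service can emit a chunk whose `choices` list is empty (carrying only metadata such as content-filter results), while the old LangChain code indexes `choices[0]` unconditionally. A minimal sketch of the guard, assuming the dict-shaped stream chunks of the pre-1.0 `openai` SDK; `consume_stream` is a hypothetical helper, not repo code:

```python
def consume_stream(chunks):
    """Accumulate a streamed chat completion, skipping any chunk whose
    'choices' list is empty instead of indexing into it and crashing."""
    role, content = "assistant", ""
    for chunk in chunks:
        choices = chunk.get("choices") or []
        if not choices:  # metadata-only chunk (e.g. Azure content filter)
            continue
        delta = choices[0].get("delta", {})
        role = delta.get("role", role)
        content += delta.get("content", "") or ""
    return role, content
```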
Considering support for open-source LLMs like ChatGLM.
Support chatting with an online PDF, with no need to download it and upload it to the web app manually.
Support for OpenAI?
Streamlit is much more powerful and makes it easier to extend the app with more applications using pages.
Decided to change the UI to Streamlit; the original one will no longer be maintained.