llama2-medical-chatbot's People

Contributors

aianytime, ali-bin-kashif, manas95826, manjunathshiva, pranav-js670, vimalkeshu


llama2-medical-chatbot's Issues

Code not giving output

My system specs: 8 GB RAM, 2 GB graphics card.

When I give the prompt "What is eczema", the app keeps loading for the response and eventually hangs.

Is this only due to my low system specs?

Link to my code

I get double output when I use this code; I don't know how to fix it.


This is the code copied from your repo:

@cl.on_chat_start
async def start():
    chain = qa_bot()
    msg = cl.Message(content="Starting the bot...")
    await msg.send()
    msg.content = "Hi, Welcome to School Assist Bot. What is your query?"
    await msg.update()

    cl.user_session.set("chain", chain)

@cl.on_message
async def main(message):
    chain = cl.user_session.get("chain")
    cb = cl.AsyncLangchainCallbackHandler(
        stream_final_answer=True, answer_prefix_tokens=["FINAL", "ANSWER"]
    )
    cb.answer_reached = True
    res = await chain.acall(message, callbacks=[cb])
    answer = res["result"]
    print(answer)
    # sources = res["source_documents"]

    # if sources:
    #     answer += "\nSources:" + str(sources)
    # else:
    #     answer += "\nNo sources found"

    await cl.Message(content=answer).send()
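A likely cause of the double output: with `stream_final_answer=True`, the callback handler already streams the final answer to the UI, so the trailing `cl.Message(content=answer).send()` posts it a second time. Below is a minimal, standalone sketch of the de-duplication guard; the Chainlit handler is stubbed out so it runs anywhere, and `has_streamed_final_answer` is the attribute name Chainlit 0.6/0.7 exposed (treat it as an assumption for other versions).

```python
class StubCallbackHandler:
    """Stand-in for cl.AsyncLangchainCallbackHandler (hypothetical stub)."""
    def __init__(self, streamed: bool):
        self.has_streamed_final_answer = streamed

def needs_final_send(cb) -> bool:
    # Send a separate Message only if the callback did NOT already
    # stream the final answer to the UI; otherwise the user sees it twice.
    return not getattr(cb, "has_streamed_final_answer", False)

print(needs_final_send(StubCallbackHandler(streamed=True)))   # False
print(needs_final_send(StubCallbackHandler(streamed=False)))  # True
```

In the real handler you would wrap the final `await cl.Message(content=answer).send()` in this check.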

Could not import ctransformers package.

I am trying to run this project on a Macbook pro m1 and am getting the following stacktrace.

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/chainlit/utils.py", line 36, in wrapper
    return await user_function(**params_values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/owencraston/src/personal/ai/demo/model.py", line 79, in start
    chain = qa_bot()
            ^^^^^^^^
  File "/Users/owencraston/src/personal/ai/demo/model.py", line 62, in qa_bot
    llm = load_llm()
          ^^^^^^^^^^
  File "/Users/owencraston/src/personal/ai/demo/model.py", line 46, in load_llm
    llm = CTransformers(
          ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic/main.py", line 1102, in pydantic.main.validate_model
  File "/usr/local/lib/python3.11/site-packages/langchain/llms/ctransformers.py", line 67, in validate_environment
    raise ImportError(
ImportError: Could not import `ctransformers` package. Please install it with `pip install ctransformers`

I have, of course, run pip install ctransformers.

I am also using Python 3.11. Does anyone know what the issue could be?
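A common cause of "Could not import X" right after a successful `pip install X` is that pip targeted a different interpreter than the one actually running the app (easy to hit on macOS with Homebrew, system, and pyenv Pythons side by side). A small diagnostic sketch, assuming nothing about the environment:

```python
import importlib.util
import sys

def has_package(name: str) -> bool:
    """Return True if `name` is importable by THIS interpreter."""
    return importlib.util.find_spec(name) is not None

# Run this with the same interpreter that launches chainlit. If it prints
# False, install into exactly this interpreter:
#   <printed path> -m pip install ctransformers
print(sys.executable)
print(has_package("ctransformers"))
```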

a sub-function of ingest.py is not working

I get the following error when I run the ingest.py file:

File "pydantic\main.py", line 120, in init pydantic.main
TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers'
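This particular TypeError usually indicates a version mismatch: the installed typing_extensions is too old for the installed pydantic (the `field_specifiers` argument was added in a newer typing_extensions). A fix sketch, not a guaranteed solution:

```shell
# Upgrade both packages in the SAME environment that runs ingest.py:
pip install --upgrade typing_extensions pydantic
```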

Torch Installation Error

After running pip install -r requirements.txt, I receive this error:

Collecting pypdf (from -r requirements.txt (line 1))
Obtaining dependency information for pypdf from https://files.pythonhosted.org/packages/40/b7/166082d3b1c9d6d0b5a27184b59fc761bce81cbc0bb26b4247992cbd2117/pypdf-3.17.1-py3-none-any.whl.metadata
Downloading pypdf-3.17.1-py3-none-any.whl.metadata (7.5 kB)
Collecting langchain (from -r requirements.txt (line 2))
Obtaining dependency information for langchain from https://files.pythonhosted.org/packages/65/f4/dea10c11f0c2ebca298ba57f546940f8944e49c624239183c924ae0e1f81/langchain-0.0.339-py3-none-any.whl.metadata
Using cached langchain-0.0.339-py3-none-any.whl.metadata (16 kB)
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
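"from versions: none" usually means PyPI has no torch wheel for this interpreter at all, typically because the Python version is newer than torch supports, or the interpreter is 32-bit on Windows. A diagnostic/workaround sketch (the CPU wheel index is PyTorch's official one; whether it helps depends on your Python version):

```shell
# Check the interpreter version and whether it is 64-bit:
python -c "import sys; print(sys.version, sys.maxsize > 2**32)"
# Try the official CPU wheel index:
pip install torch --index-url https://download.pytorch.org/whl/cpu
```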

Errors

I got these two errors while running the code:

1. Could not load library with AVX2 support due to:
   ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")

2. NotImplementedError: Async generation not implemented for this LLM.

Any solution please?
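The first message is only a warning: faiss falls back to its generic (non-AVX2) build and still works. The second is the real failure: the LLM wrapper has no async generation path, so `await chain.acall(...)` raises. One workaround is to run the synchronous call in a worker thread so the event loop stays responsive. A runnable sketch with the chain stubbed out (`sync_chain` is a hypothetical stand-in; in Chainlit, `cl.make_async(chain)` wraps the same pattern):

```python
import asyncio

def sync_chain(query: str) -> dict:
    """Stand-in for a synchronous LangChain call, e.g. chain({...})."""
    return {"result": f"answer to {query!r}"}

async def main():
    # Run the blocking sync call in a thread instead of awaiting chain.acall()
    res = await asyncio.to_thread(sync_chain, "What is eczema?")
    print(res["result"])

asyncio.run(main())
```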

AttributeError: 'Message' object has no attribute 'replace'

(chatbot) C:\Users\risha\Documents\AI\LLM Chatbot OS>chainlit run main.py
2023-11-02 10:05:42 - Your app is available at http://localhost:8000
2023-11-02 10:05:56 - Load pretrained SentenceTransformer: sentence-transformers/all-MiniLM-L6-v2
2023-11-02 10:05:59 - Loading faiss with AVX2 support.
2023-11-02 10:05:59 - Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-11-02 10:05:59 - Loading faiss.
2023-11-02 10:05:59 - Successfully loaded faiss.
2023-11-02 10:06:07 - 'Message' object has no attribute 'replace'
Traceback (most recent call last):
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\chainlit\utils.py", line 39, in wrapper
    return await user_function(**params_values)
  File "C:\Users\risha\Documents\AI\LLM Chatbot OS\main.py", line 92, in main
    res = await chain.acall(message, callbacks=[cb])
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\chains\base.py", line 379, in acall
    raise e
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\chains\base.py", line 373, in acall
    await self._acall(inputs, run_manager=run_manager)
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\chains\retrieval_qa\base.py", line 179, in _acall
    docs = await self._aget_docs(question, run_manager=_run_manager)
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\chains\retrieval_qa\base.py", line 227, in _aget_docs
    return await self.retriever.aget_relevant_documents(
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\schema\retriever.py", line 269, in aget_relevant_documents
    raise e
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\schema\retriever.py", line 262, in aget_relevant_documents
    result = await self._aget_relevant_documents(
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\schema\vectorstore.py", line 677, in _aget_relevant_documents
    docs = await self.vectorstore.asimilarity_search(
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\vectorstores\faiss.py", line 534, in asimilarity_search
    docs_and_scores = await self.asimilarity_search_with_score(
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\vectorstores\faiss.py", line 421, in asimilarity_search_with_score
    embedding = await self._aembed_query(query)
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\vectorstores\faiss.py", line 159, in _aembed_query
    return await self.embedding_function.aembed_query(text)
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\schema\embeddings.py", line 25, in aembed_query
    return await asyncio.get_running_loop().run_in_executor(
  File "C:\Users\risha\anaconda3\envs\chatbot\Lib\asyncio\futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
  File "C:\Users\risha\anaconda3\envs\chatbot\Lib\asyncio\tasks.py", line 339, in __wakeup
    future.result()
  File "C:\Users\risha\anaconda3\envs\chatbot\Lib\asyncio\futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "C:\Users\risha\anaconda3\envs\chatbot\Lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\embeddings\huggingface.py", line 105, in embed_query
    return self.embed_documents([text])[0]
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\embeddings\huggingface.py", line 86, in embed_documents
    texts = list(map(lambda x: x.replace("\n", " "), texts))
  File "C:\Users\risha\AppData\Roaming\Python\Python311\site-packages\langchain\embeddings\huggingface.py", line 86, in <lambda>
    texts = list(map(lambda x: x.replace("\n", " "), texts))
AttributeError: 'Message' object has no attribute 'replace'
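The traceback bottoms out in the embedding code calling `.replace()` on the query, which means the chain received Chainlit's `Message` object rather than a string: newer Chainlit versions pass a `Message` to the `on_message` handler, so the fix is to pass `message.content` to `chain.acall(...)`. A standalone sketch of the extraction, with a stub `Message` standing in for `chainlit.Message`:

```python
class Message:
    """Stand-in for chainlit.Message (only the attribute we need here)."""
    def __init__(self, content: str):
        self.content = content

def query_text(message) -> str:
    # Accept either the older str payload or the newer Message object,
    # so the chain always receives a plain string.
    return message.content if hasattr(message, "content") else message

print(query_text(Message("What is eczema?")))  # What is eczema?
print(query_text("plain string"))              # plain string
```

In main.py this amounts to `res = await chain.acall(message.content, callbacks=[cb])`.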

huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-64d23a76-223c75184c82f38733ae8e33;9906dc7e-c132-423c-99c9-463af43648a8)

Repository Not Found for url: https://huggingface.co/api/models/llama-2-7b-chat.ggmlv3.q8_0.bin/revision/main.
Please make sure you specified the correct repo_id and repo_type.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.

@AIAnytime Please help me out on this issue
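The URL in the error shows that the local file name `llama-2-7b-chat.ggmlv3.q8_0.bin` was passed where the Hub expected a repo id, so the lookup fails with a 401/404. One approach is to download the GGML weights first and point `CTransformers` at the local path. A sketch; the repo and file names below are assumptions to verify, not taken from this project's code:

```shell
# Download the quantized weights locally (repo/file names are assumptions):
wget https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q8_0.bin
# Then load by path, e.g.:
#   CTransformers(model="./llama-2-7b-chat.ggmlv3.q8_0.bin", model_type="llama")
```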

extra fields not permitted (type=value_error.extra) when running the model code

Hi,

Thanks for sharing!

When running the code, I get the following error:

File "/venv/lib/python3.11/site-packages/langchain/load/serializable.py", line 74, in __init__
    super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for RetrievalQA
return_source_document
    extra fields not permitted (type=value_error.extra)

I'm using langchain 0.0.240 and pydantic 1.10.11.

This seems to be related to return_source_document=True in RetrievalQA.from_chain_type; when I comment out this argument, it works without issue. (LangChain's parameter is spelled return_source_documents, with a trailing "s", which is why the singular form is rejected as an extra field.)

RuntimeError: Error in faiss::FileIOReader::FileIOReader(const char*) at /project/faiss/faiss/impl/io.cpp:67: Error: 'f' failed: could not open vectorstore/db_faiss/index.faiss for reading: No such file or directory

Traceback (most recent call last):
  File "/home/hdd/MLProjects/yellow-paper-physics-bot/models.py", line 61, in <module>
    f = final_result("Define Weight")
  File "/home/hdd/MLProjects/yellow-paper-physics-bot/models.py", line 57, in final_result
    qa_result = qa_bot()
  File "/home/hdd/MLProjects/yellow-paper-physics-bot/models.py", line 49, in qa_bot
    db = FAISS.load_local(DB_FAISS_PATH, embeddings)
  File "/home/varun/.local/lib/python3.11/site-packages/langchain/vectorstores/faiss.py", line 696, in load_local
    index = faiss.read_index(
  File "/home/varun/.local/lib/python3.11/site-packages/faiss/swigfaiss_avx2.py", line 10206, in read_index
    return _swigfaiss_avx2.read_index(*args)
RuntimeError: Error in faiss::FileIOReader::FileIOReader(const char*) at /project/faiss/faiss/impl/io.cpp:67: Error: 'f' failed: could not open vectorstore/db_faiss/index.faiss for reading: No such file or directory
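The error simply means `vectorstore/db_faiss/index.faiss` does not exist yet: the ingestion step that builds the index has to run before the bot can load it. A small runnable guard (the path constant mirrors the error message; the `ingest.py` hint follows this repo's convention):

```python
import os

DB_FAISS_PATH = "vectorstore/db_faiss"

def index_exists(path: str = DB_FAISS_PATH) -> bool:
    """True once the ingestion script has written the FAISS index to disk."""
    return os.path.isfile(os.path.join(path, "index.faiss"))

if not index_exists():
    print("No FAISS index found - run `python ingest.py` first.")
```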

Could not reach the server

Nice project and video!

As an LLM newbie I may be too optimistic trying to run this with the quantized Llama 2 model llama-2-7b-chat.Q4_K_M.gguf on a CPU with only 8 GB RAM. The Chainlit page loads, but after I enter a question it eventually times out with the message "Could not reach the server". Should I increase the session_timeout parameter in config.toml, or run some of the code in async mode?
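Raising Chainlit's session timeout can help mask slow CPU inference; `session_timeout` lives under the `[project]` table in `.chainlit/config.toml` (the value below is an arbitrary example, not a recommendation). Whether the model fits comfortably in 8 GB RAM is a separate question; trying a smaller quantization is the usual first step.

```toml
[project]
# Keep the websocket session alive longer while the CPU model generates
# (seconds; example value)
session_timeout = 3600
```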
