
local_llama's People

Contributors

jlonge4, zildj1an


local_llama's Issues

ImportError: cannot import name 'download_loader' from 'llama_index'

I did a fresh install (pip install -r requirements.txt) in a conda environment and stumbled across this error.

As you can see from my profile, I do not open issues very often, so please tell me if I need to provide more information.

Network URL: http://192.168.178.82:8501

2023-05-24 20:16:32.238 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
  exec(code, module.__dict__)
File "C:\Users\derdi\local_llama\local_llama.py", line 2, in <module>
  from llama_index import download_loader, SimpleDirectoryReader, ServiceContext, LLMPredictor, GPTVectorStoreIndex, \
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\llama_index\__init__.py", line 18, in <module>
  from llama_index.indices.common.struct_store.base import SQLDocumentContextBuilder
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\llama_index\indices\__init__.py", line 4, in <module>
  from llama_index.indices.keyword_table.base import GPTKeywordTableIndex
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\llama_index\indices\keyword_table\__init__.py", line 4, in <module>
  from llama_index.indices.keyword_table.base import GPTKeywordTableIndex
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\llama_index\indices\keyword_table\base.py", line 18, in <module>
  from llama_index.indices.base import BaseGPTIndex
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\llama_index\indices\base.py", line 8, in <module>
  from llama_index.indices.base_retriever import BaseRetriever
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\llama_index\indices\base_retriever.py", line 5, in <module>
  from llama_index.indices.query.schema import QueryBundle, QueryType
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\llama_index\indices\query\__init__.py", line 3, in <module>
  from llama_index.indices.query.response_synthesis import ResponseSynthesizer
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\llama_index\indices\query\response_synthesis.py", line 5, in <module>
  from llama_index.indices.postprocessor.types import BaseNodePostprocessor
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\llama_index\indices\postprocessor\__init__.py", line 4, in <module>
  from llama_index.indices.postprocessor.node import (
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\llama_index\indices\postprocessor\node.py", line 236, in <module>
  class AutoPrevNextNodePostprocessor(BasePydanticNodePostprocessor):
File "pydantic\main.py", line 197, in pydantic.main.ModelMetaclass.__new__
File "pydantic\fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic\fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 557, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 831, in pydantic.fields.ModelField.populate_validators
File "pydantic\validators.py", line 725, in find_validators
File "pydantic\dataclasses.py", line 478, in make_dataclass_validator
File "pydantic\dataclasses.py", line 231, in pydantic.dataclasses.dataclass
File "pydantic\dataclasses.py", line 224, in pydantic.dataclasses.dataclass.wrap
File "pydantic\dataclasses.py", line 347, in pydantic.dataclasses._add_pydantic_validation_attributes
File "pydantic\dataclasses.py", line 400, in pydantic.dataclasses.create_pydantic_model_from_dataclass
File "pydantic\main.py", line 1026, in pydantic.main.create_model
File "pydantic\main.py", line 197, in pydantic.main.ModelMetaclass.__new__
File "pydantic\fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic\fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 639, in pydantic.fields.ModelField._type_analysis
File "C:\Users\derdi\.conda\envs\quanization\lib\typing.py", line 1498, in __instancecheck__
  raise TypeError("Instance and class checks can only be used with"
TypeError: Instance and class checks can only be used with @runtime_checkable protocols
2023-05-24 20:16:32.512 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\derdi\.conda\envs\quanization\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
  exec(code, module.__dict__)
File "C:\Users\derdi\local_llama\local_llama.py", line 2, in <module>
  from llama_index import download_loader, SimpleDirectoryReader, ServiceContext, LLMPredictor, GPTVectorStoreIndex, \
ImportError: cannot import name 'download_loader' from 'llama_index' (C:\Users\derdi\.conda\envs\quanization\lib\site-packages\llama_index\__init__.py)

Edit: created a ticket run-llama/llama_index#3869
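
A minimal compatibility shim, assuming the ImportError comes from a newer llama_index release that no longer exports download_loader at the top level; the llama_index.core fallback path is an assumption based on later versions of the package:

# Hedged sketch: try the old import first, then the relocated one.
try:
    from llama_index import download_loader          # older releases (~0.6.x) exposed it here
except ImportError:
    from llama_index.core import download_loader     # assumption: newer releases moved it here

Alternatively, pinning llama_index in requirements.txt to the release the repo was developed against avoids the mismatch entirely.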

FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/../GPT_INDEXES/None/docstore.json'

Whenever I submit a prompt after attaching a PDF file, I get this error:

FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/avashish/GPT_INDEXES/None/docstore.json'
Traceback:
File "C:\Users\avashish\AppData\Local\anaconda3\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 552, in _run_script
exec(code, module.dict)
File "C:\Users\avashish\local_llama2.py", line 143, in
query_index(query_u=user_input)
File "C:\Users\avashish\local_llama2.py", line 86, in query_index
storage_context = StorageContext.from_defaults(persist_dir=f"{PATH_TO_INDEXES}/{pdf_to_use}")
File "C:\Users\avashish\AppData\Local\anaconda3\lib\site-packages\llama_index\storage\storage_context.py", line 75, in from_defaults
docstore = docstore or SimpleDocumentStore.from_persist_dir(
File "C:\Users\avashish\AppData\Local\anaconda3\lib\site-packages\llama_index\storage\docstore\simple_docstore.py", line 57, in from_persist_dir
return cls.from_persist_path(persist_path, namespace=namespace, fs=fs)
File "C:\Users\avashish\AppData\Local\anaconda3\lib\site-packages\llama_index\storage\docstore\simple_docstore.py", line 75, in from_persist_path
simple_kvstore = SimpleKVStore.from_persist_path(persist_path, fs=fs)
File "C:\Users\avashish\AppData\Local\anaconda3\lib\site-packages\llama_index\storage\kvstore\simple_kvstore.py", line 75, in from_persist_path
with fs.open(persist_path, "rb") as f:
File "C:\Users\avashish\AppData\Local\anaconda3\lib\site-packages\fsspec\spec.py", line 1241, in open
f = self._open(
File "C:\Users\avashish\AppData\Local\anaconda3\lib\site-packages\fsspec\implementations\local.py", line 184, in _open
return LocalFileOpener(path, mode, fs=self, **kwargs)
File "C:\Users\avashish\AppData\Local\anaconda3\lib\site-packages\fsspec\implementations\local.py", line 315, in init
self._open()
File "C:\Users\avashish\AppData\Local\anaconda3\lib\site-packages\fsspec\implementations\local.py", line 320, in _open
self.f = open(self.path, mode=self.mode)
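
The 'None' segment in the path suggests no index had been selected (or built) when the prompt was submitted, so the persist directory resolves to GPT_INDEXES/None. A minimal guard sketch, assuming pdf_to_use comes from the app's index selector; the exact wiring inside local_llama2.py is an assumption:

# Hypothetical guard to run before StorageContext.from_defaults is called.
import os
from typing import Optional

import streamlit as st

PATH_TO_INDEXES = "GPT_INDEXES"  # assumed to match the value configured for the app

def resolve_persist_dir(pdf_to_use: Optional[str]) -> Optional[str]:
    """Return the persist directory only if an index was actually selected and saved."""
    if not pdf_to_use:  # None / empty selection would otherwise become '.../None/docstore.json'
        st.warning("Select an indexed PDF before submitting a prompt.")
        return None
    persist_dir = os.path.join(PATH_TO_INDEXES, pdf_to_use)
    if not os.path.exists(os.path.join(persist_dir, "docstore.json")):
        st.warning(f"No saved index found at {persist_dir}; index the PDF first.")
        return None
    return persist_dir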

FileNotFoundError and LLM model location

Hi! Thank you very much!
When I try to upload a PDF, I get an error. Can you please tell me what can be done here?

FileNotFoundError: [Errno 2] No such file or directory: 'WHERE YOUR PDFS ARE (SINGLE DIRECTORY)/Bruce_Bruce_2018_Practical Statistics for Data Scientists.pdf'
Traceback:
File "/usr/local/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "/Users/user/GitHub/local_llama/local_llama.py", line 152, in <module>
    save_pdf(file.name)
File "/Users/user/GitHub/local_llama/local_llama.py", line 103, in save_pdf
    pdf_to_index(pdf_path=f'{PATH_TO_PDFS}/{file}', save_path=f'{PATH_TO_INDEXES}/{file}')
File "/Users/user/GitHub/local_llama/local_llama.py", line 69, in pdf_to_index
    documents = loader.load_data(file=Path(pdf_path))
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/llama_index/readers/llamahub_modules/file/pdf/base.py", line 19, in load_data
    with open(file, "rb") as fp:
         ^^^^^^^^^^^^^^^^

And where do I specify the path to the LLM model?
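
The literal 'WHERE YOUR PDFS ARE (SINGLE DIRECTORY)' in the path means the configuration placeholders were never filled in. A minimal sketch of the kind of values they expect, written as Python assignments; the variable names come from the tracebacks, the paths are examples only, and how the app actually reads them (env_vars file vs. constants) is an assumption:

# Hypothetical configuration values; adjust the paths to your machine.
PATH_TO_PDFS = "/Users/user/Documents/pdfs"                     # single directory holding your PDFs
PATH_TO_INDEXES = "/Users/user/Documents/gpt_indexes"           # where built indexes are persisted
MODEL_PATH = "/Users/user/models/wizardLM-7B.ggmlv3.q4_0.bin"   # this is where the LLM model path goes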

M2: error upon first running

streamlit.errors.StreamlitAPIException: set_page_config() can only be called once per app page, and must be called as the first Streamlit command in your script.
This is thrown after following the instructions and filling in the env_vars.
I looked at the code, and it does seem like you are calling it correctly, at least to my untrained eye, so I'm a bit unsure where that's coming from.
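
For reference, Streamlit requires st.set_page_config() to be the very first Streamlit command the script executes; a minimal sketch (the page title is illustrative):

# set_page_config must run before any other st.* call.
import streamlit as st

st.set_page_config(page_title="local_llama")
st.title("Local Llama")  # every other Streamlit command comes after

One common trigger for this exception is an imported module or an earlier code path issuing a Streamlit command before set_page_config runs, which is worth checking here.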

problem with ollama

Hi,

First, this is a great project. I love it!

I tried to run v3, as I have installed a few LLMs with ollama (which works fine), but I keep hitting this error:
ValueError: The number of documents in the SQL database (229) doesn't match the number of embeddings in FAISS (0). Make sure your FAISS configuration file points to the same database that you used when you saved the original index.

This happens when I ask any question, whether or not I upload a document; both cases give the same error.
I checked, and ollama is running on port 11434 (the default).

For info, I'm on Fedora with Python 3.10.13 in a venv.
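
The wording of this ValueError matches the consistency check Haystack's FAISSDocumentStore performs when its SQLite document store and the FAISS index drift apart (here 229 stored documents but 0 stored embeddings); whether local_llama v3 actually uses that store is an assumption. A minimal recovery sketch that clears the stale files so they are rebuilt on the next indexing run; the file names are assumptions (Haystack's default SQLite name plus example index names):

# Hedged cleanup: remove the out-of-sync store/index files, then re-upload and re-index the documents.
import os

for stale in ("faiss_document_store.db", "faiss_index.faiss", "faiss_index.json"):
    if os.path.exists(stale):
        os.remove(stale)
        print(f"removed stale {stale}")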

ValueError: Requested tokens exceed context window of 512

I tried using this on a paper (DOI 10.1159/000346379), but asking "what is dialysis?" instantly crashes. I am using wizardLM-7B.ggmlv3.q4_0.bin.

python -m streamlit run local_llama.py

  You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8501
  Network URL: http://192.168.178.82:8501

Achieving high convective volumes in online HDF.pdf
llama.cpp: loading model from D:\wizardLM-7B.ggmlv3.q4_0.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32001
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =    0.07 MB
llama_model_load_internal: mem required  = 5407.72 MB (+ 1026.00 MB per state)
.
llama_init_from_file: kv self size  =  256.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
llama_tokenize: too many tokens
2023-05-24 21:55:06.995 Uncaught app exception
Traceback (most recent call last):
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\derdi\local_llama\local_llama.py", line 139, in <module>
    query_index(query_u=user_input)
  File "C:\Users\derdi\local_llama\local_llama.py", line 85, in query_index
    response = query_engine.query(query_u)
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_index\indices\query\base.py", line 18, in query
    return self._query(str_or_query_bundle)
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_index\query_engine\retriever_query_engine.py", line 145, in _query
    response = self._response_synthesizer.synthesize(
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_index\indices\query\response_synthesis.py", line 163, in synthesize
    response_str = self._response_builder.get_response(
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_index\indices\response\compact_and_refine.py", line 57, in get_response
    response = super().get_response(
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_index\token_counter\token_counter.py", line 78, in wrapped_llm_predict
    f_return_val = f(_self, *args, **kwargs)
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_index\indices\response\refine.py", line 52, in get_response
    response = self._give_response_single(
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_index\indices\response\refine.py", line 89, in _give_response_single
    ) = self._service_context.llm_predictor.predict(
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_index\llm_predictor\base.py", line 244, in predict
    llm_prediction = self._predict(prompt, **prompt_args)
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_index\llm_predictor\base.py", line 212, in _predict
    llm_prediction = retry_on_exceptions_with_backoff(
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_index\utils.py", line 177, in retry_on_exceptions_with_backoff
    return lambda_fn()
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_index\llm_predictor\base.py", line 213, in <lambda>
    lambda: llm_chain.predict(**full_prompt_args),
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\langchain\chains\llm.py", line 213, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\langchain\chains\base.py", line 140, in __call__
    raise e
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\langchain\chains\base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\langchain\chains\llm.py", line 69, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\langchain\chains\llm.py", line 79, in generate
    return self.llm.generate_prompt(
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\langchain\llms\base.py", line 134, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks)
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\langchain\llms\base.py", line 191, in generate
    raise e
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\langchain\llms\base.py", line 185, in generate
    self._generate(prompts, stop=stop, run_manager=run_manager)
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\langchain\llms\base.py", line 438, in _generate
    else self._call(prompt, stop=stop)
  File "C:\Users\derdi\local_llama\local_llama.py", line 39, in _call
    output = llm(f"Q: {prompt} A: ", max_tokens=256,
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_cpp\llama.py", line 1101, in __call__
    return self.create_completion(
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_cpp\llama.py", line 1055, in create_completion
    completion: Completion = next(completion_or_chunks)  # type: ignore
  File "C:\Users\derdi\.conda\envs\llama_local_pdf_stuff\lib\site-packages\llama_cpp\llama.py", line 658, in _create_completion
    raise ValueError(
ValueError: Requested tokens exceed context window of 512
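
The traceback ends in local_llama.py's _call, where the retrieved context plus the question plus max_tokens=256 no longer fit in the model's 512-token window (n_ctx = 512 in the llama.cpp load log above). A minimal sketch of one way out, assuming llama-cpp-python loads the model; 2048 is an illustrative value, and the app's chunk/prompt sizing would need to match it:

# Hypothetical: give llama.cpp a larger context window so chunks + question + answer fit.
from llama_cpp import Llama

llm = Llama(model_path="D:/wizardLM-7B.ggmlv3.q4_0.bin", n_ctx=2048, n_threads=8)
out = llm("Q: what is dialysis? A: ", max_tokens=256)
print(out["choices"][0]["text"])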

ERROR: Failed building wheel for faiss-cpu

Hello, I'm trying to set up the project and running into issues.

Previous issues that seem related

Maybe this should be an issue for the upstream library itself.

Maybe these are related?

System Config

  • Running on an M2 Max machine
  • 64 GB RAM
  • macOS Sonoma 14.4

Steps to repro the error

I have ollama running in a Docker container, and it can be accessed at http://localhost:11434.

# copy the repo
git clone https://github.com/jlonge4/local_llama.git

# root dir
cd local_llama

# check python version
❯ python3 --version
Python 3.12.2

# create a python virtual env
python3 -m venv llama

# activate env
source llama/bin/activate

# fulfill requirements
python3 -m pip install -r requirements.txt

Failure

Error Stacktrace:

Using cached cryptography-42.0.5-cp39-abi3-macosx_10_12_universal2.whl (5.9 MB)
Using cached cffi-1.16.0-cp312-cp312-macosx_11_0_arm64.whl (177 kB)
Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Building wheels for collected packages: faiss-cpu
  Building wheel for faiss-cpu (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for faiss-cpu (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [8 lines of output]
      running bdist_wheel
      running build
      running build_py
      running build_ext
      building 'faiss._swigfaiss' extension
      swigging faiss/faiss/python/swigfaiss.i to faiss/faiss/python/swigfaiss_wrap.cpp
      swig -python -c++ -Doverride= -I/usr/local/include -Ifaiss -doxygen -o faiss/faiss/python/swigfaiss_wrap.cpp faiss/faiss/python/swigfaiss.i
      error: command 'swig' failed: No such file or directory
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for faiss-cpu
Failed to build faiss-cpu
ERROR: Could not build wheels for faiss-cpu, which is required to install pyproject.toml-based projects

Is there anything I could do to fix it?

Thank you.
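
The build log shows the underlying failure: pip found no prebuilt faiss-cpu wheel for this interpreter, fell back to compiling from source, and the source build needs the swig tool. Installing swig (for example via Homebrew) or switching to a Python version for which faiss-cpu publishes wheels are the usual ways out; a small hedged check in Python:

# Confirm whether the 'swig' binary the faiss-cpu source build needs is on PATH.
import shutil

if shutil.which("swig") is None:
    print("swig not found: install it, or use a Python version with prebuilt faiss-cpu wheels")
else:
    print("swig is available; the source build should get past this step")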

chunk_overlap_ratio must be a float between 0. and 1

Using macOS Monterey (v12.6), I run:

$ python -m streamlit run local_llama.py

and get the error:

ValueError: chunk_overlap_ratio must be a float between 0. and 1.
Traceback:

File "/path/script_runner.py", line 552, in _run_script
    exec(code, module.__dict__)
File "/path/local_llama.py", line 30, in <module>
    prompt_helper = PromptHelper(max_input_size, num_output, max_chunk_overlap)
File "/path/prompt_helper.py", line 72, in __init__
    raise ValueError("chunk_overlap_ratio must be a float between 0. and 1.")


I have a fix and plan to do a pull request.
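
For context, newer llama_index releases replaced PromptHelper's max_chunk_overlap (a token count) with chunk_overlap_ratio (a 0-1 fraction), so passing the old value as the third argument trips this check. A minimal sketch of the shape of the fix, with illustrative numbers:

# Hedged sketch: pass a ratio between 0 and 1 instead of a raw token count.
from llama_index import PromptHelper

max_input_size = 4096   # illustrative values
num_output = 256
prompt_helper = PromptHelper(max_input_size, num_output, chunk_overlap_ratio=0.2)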

Awesome project!

This is an awesome project. I pulled the code and got it up and running quickly.

Do you have any ideas on how to improve the query results from my uploaded documents, or how to fine-tune the LLM based on my uploaded documents? Any suggestions for improving the search speed?

Thanks
Kevin

OSError: [WinError -529697949] Windows Error 0xe06d7363

I tried the following models:

MODEL_NAME = 'ggml-vicuna-7b-q4_0.bin'
MODEL_PATH = r"D:\\ggml-vicuna-7b-q4_0.bin"
MODEL_NAME = 'GPT4All-13B-snoozy.ggmlv3.q4_1.bin'
MODEL_PATH = r"D:\\GPT4All-13B-snoozy.ggmlv3.q4_1.bin"
MODEL_NAME = 'ggml-old-vic7b-q4_0.bin'
MODEL_PATH = r"C:\\Users\\elnuevo\\Downloads\\ggml-old-vic7b-q4_0.bin"

But only the GPT4All model seems to work: it did not crash, but it took forever to deliver an answer, so I aborted anyway.

(local_llama_newpythno) C:\Users\elnuevo\local_llama>python -m streamlit run local_llama.py

  You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8501
  Network URL: http://192.168.178.35:8501

A Review Article Access Recirculation Among End Stage Renal Disease Patients Undergoing Maintenance Hemodialysis.pdf
llama.cpp: loading model from C:\\Users\\elnuevo\\Downloads\\ggml-old-vic7b-q4_0.bin
2023-05-26 16:05:27.237 Uncaught app exception
Traceback (most recent call last):
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\elnuevo\local_llama\local_llama.py", line 146, in <module>
    query_index(query_u=user_input)
  File "C:\Users\elnuevo\local_llama\local_llama.py", line 92, in query_index
    response = query_engine.query(query_u)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\indices\query\base.py", line 23, in query
    response = self._query(str_or_query_bundle)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\query_engine\retriever_query_engine.py", line 145, in _query
    response = self._response_synthesizer.synthesize(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\indices\query\response_synthesis.py", line 178, in synthesize
    response_str = self._response_builder.get_response(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\indices\response\compact_and_refine.py", line 57, in get_response
    response = super().get_response(
               ^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\token_counter\token_counter.py", line 78, in wrapped_llm_predict
    f_return_val = f(_self, *args, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\indices\response\refine.py", line 52, in get_response
    response = self._give_response_single(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\indices\response\refine.py", line 89, in _give_response_single
    ) = self._service_context.llm_predictor.predict(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\llm_predictor\base.py", line 245, in predict
    llm_prediction = self._predict(prompt, **prompt_args)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\llm_predictor\base.py", line 213, in _predict
    llm_prediction = retry_on_exceptions_with_backoff(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\utils.py", line 177, in retry_on_exceptions_with_backoff
    return lambda_fn()
           ^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_index\llm_predictor\base.py", line 214, in <lambda>
    lambda: llm_chain.predict(**full_prompt_args),
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\chains\llm.py", line 213, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\chains\base.py", line 140, in __call__
    raise e
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\chains\base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\chains\llm.py", line 69, in _call
    response = self.generate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\chains\llm.py", line 79, in generate
    return self.llm.generate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\llms\base.py", line 134, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\llms\base.py", line 191, in generate
    raise e
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\llms\base.py", line 185, in generate
    self._generate(prompts, stop=stop, run_manager=run_manager)
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\langchain\llms\base.py", line 438, in _generate
    else self._call(prompt, stop=stop)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\elnuevo\local_llama\local_llama.py", line 44, in _call
    llm = Llama(model_path=MODEL_PATH, n_threads=NUM_THREADS, n_ctx=n_ctx)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_cpp\llama.py", line 158, in __init__
    self.ctx = llama_cpp.llama_init_from_file(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\anaconda\envs\local_llama_newpythno\Lib\site-packages\llama_cpp\llama_cpp.py", line 262, in llama_init_from_file
    return _lib.llama_init_from_file(path_model, params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError -529697949] Windows Error 0xe06d7363
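
Windows error 0xe06d7363 is a C++ exception escaping from the native llama.cpp library inside llama_init_from_file, which usually means the installed llama-cpp-python build cannot read that model file; the working GPT4All file is ggmlv3, while the vicuna file names suggest an older ggml format. A minimal isolation sketch, run outside Streamlit, to confirm whether a given file loads at all (the path is one of the models listed above):

# Hedged check: if this raises the same OSError, the file format and the installed
# llama-cpp-python version are incompatible; try a ggmlv3 (or newer) conversion of the model.
from llama_cpp import Llama

llm = Llama(model_path=r"D:\ggml-vicuna-7b-q4_0.bin", n_ctx=512)
print(llm("Q: hello A: ", max_tokens=16)["choices"][0]["text"])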
