chatchat-space / langchain-chatchat

Langchain-Chatchat (formerly langchain-ChatGLM): local knowledge-base LLM (like ChatGLM) QA app built with langchain

License: Apache License 2.0

Python 99.78% Shell 0.04% Dockerfile 0.18%
chatglm langchain llm knowledge-base llama chatbot chatgpt embedding faiss fastchat

langchain-chatchat's Introduction

🌍 READ THIS IN ENGLISH 🌍 READ THIS IN JAPANESE

📃 LangChain-Chatchat (formerly Langchain-ChatGLM)

A retrieval-augmented generation (RAG) knowledge-base project built on large language models such as ChatGLM and application frameworks such as Langchain; open source and deployable offline.

⚠️ Important Notice

0.2.10 will be the last release of the 0.2.x series. The 0.2.x series will stop receiving updates and technical support, and all effort will go into developing the more capable Langchain-Chatchat 0.3.x. Subsequent bug fixes for 0.2.10 will be pushed directly to the master branch without further version releases.


Table of Contents

Introduction

🤖️ A question-answering application over local knowledge bases built with langchain, aiming to provide a knowledge-base QA solution that is friendly to Chinese-language scenarios and open-source models and can run fully offline.

💡 Inspired by GanymedeNil's project document.ai and the ChatGLM-6B Pull Request created by AlexZhangji, this project builds a local knowledge-base QA application that can run end-to-end on open-source models. The latest version uses FastChat to serve models such as Vicuna, Alpaca, LLaMA, Koala and RWKV, and, relying on the langchain framework, can be used either through the API service provided via FastAPI or through the Streamlit-based WebUI.

✅ Relying on the open-source LLM and Embedding models supported by this project, it can be deployed fully offline and privately using open-source models only. The project also supports calling the OpenAI GPT API, and support for more models and model APIs will continue to be added.

⛓️ The implementation principle is shown in the figure below. The process is: load files -> read text -> split text -> vectorize text -> vectorize the question -> match the top-k text vectors most similar to the question vector -> add the matched text to the prompt as context together with the question -> submit to the LLM to generate the answer.

📺 Video introduction of the principle

Implementation diagram

From the document-processing perspective, the workflow is as follows:

Implementation diagram 2
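As a rough illustration of the retrieval pipeline described above, the following is a minimal sketch using the langchain 0.0.x APIs this project targets; the file path, model name and question are illustrative placeholders rather than the project's actual code.

from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# load file -> read text -> split text into chunks
docs = TextLoader("knowledge/sample.txt", encoding="utf-8").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# vectorize the chunks and build an in-memory FAISS index
embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-large-zh")
vector_store = FAISS.from_documents(chunks, embeddings)

# vectorize the question and match the top-k most similar chunks
query = "What is this document about?"
matched = vector_store.similarity_search(query, k=3)

# add the matched text to the prompt as context together with the question
context = "\n".join(doc.page_content for doc in matched)
prompt = f"Answer the question based on the following context.\n{context}\nQuestion: {query}"
# `prompt` is then submitted to the LLM (e.g. ChatGLM served through FastChat) to generate the answer.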

🚩 This project does not involve fine-tuning or training, but fine-tuning or training can be used to improve its results.

🌐 AutoDL image: the code used by the 0.2.10 image has been updated to v0.2.10 of this project.

🐳 The Docker image has been updated to version 0.2.10.

🌲 After this update, DockerHub, Alibaba Cloud and Tencent Cloud image registries are all supported:

docker run -d --gpus all -p 80:8501 isafetech/chatchat:0.2.10
docker run -d --gpus all -p 80:8501 uswccr.ccs.tencentyun.com/chatchat/chatchat:0.2.10
docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.10

🧩 This project has a very complete Wiki. The README is only a brief introduction and a getting-started tutorial that covers basic operation. If you want a deeper understanding of the project, or want to contribute to it, please head to the Wiki.

Pain Points Addressed

This project is a knowledge-base-augmented solution with fully local inference, focusing on the enterprise pain points of data security and private (on-premises) deployment. The open-source solution is released under the Apache License and can be used commercially free of charge.

We support the mainstream local large language models and Embedding models on the market, as well as open-source local vector databases. See the Wiki for the full list of supported components.

Quick Start

1. Environment Setup

  • First, make sure your machine has Python 3.8 - 3.11 installed (we strongly recommend Python 3.11).
$ python --version
Python 3.11.7

Next, create a virtual environment and install the project's dependencies inside it:

# Clone the repository
$ git clone https://github.com/chatchat-space/Langchain-Chatchat.git

# Enter the directory
$ cd Langchain-Chatchat

# Install all dependencies
$ pip install -r requirements.txt 
$ pip install -r requirements_api.txt
$ pip install -r requirements_webui.txt  

# The default dependencies cover the basic runtime environment (FAISS vector store). To use vector stores such as milvus/pg_vector, uncomment the corresponding dependencies in requirements.txt before installing.

Please note that the LangChain-Chatchat 0.2.x series targets the Langchain 0.0.x series. If you are using a Langchain 0.1.x release, you need to downgrade your Langchain version.

2. Model Download

To run this project locally or in an offline environment, you first need to download the required models to your machine. Open-source LLM and Embedding models can usually be downloaded from HuggingFace.

Taking the default LLM model THUDM/ChatGLM3-6B and the default Embedding model BAAI/bge-large-zh used in this project as examples:

Downloading the models requires Git LFS; install it first, then run:

$ git lfs install
$ git clone https://huggingface.co/THUDM/chatglm3-6b
$ git clone https://huggingface.co/BAAI/bge-large-zh
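
If Git LFS is unavailable, an alternative is the HuggingFace Python client; this is a sketch assuming the huggingface_hub package is installed, and the local directory names are illustrative.

from huggingface_hub import snapshot_download

# download both model repositories into local folders
snapshot_download(repo_id="THUDM/chatglm3-6b", local_dir="chatglm3-6b")
snapshot_download(repo_id="BAAI/bge-large-zh", local_dir="bge-large-zh")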

3. Initialize the Knowledge Base and Configuration Files

Initialize your knowledge base and copy the example configuration files as follows:

$ python copy_config_example.py      # copy the example configuration files into place
$ python init_database.py --recreate-vs   # initialize the knowledge base and (re)create its vector store

4. One-Click Startup

Start the project with the following command:

$ python startup.py -a

5. Startup Interface Examples

If the startup is successful, you will see the following interfaces:

  1. FastAPI Docs interface

  2. Web UI startup interface examples:
  • Web UI chat interface:

img

  • Web UI knowledge base management page:

Note

The steps above are only meant to get you started quickly. For more features and custom startup options, please refer to the Wiki.


Project Milestones

  • April 2023: Langchain-ChatGLM 0.1.0 released, supporting local knowledge-base QA based on the ChatGLM-6B model.

  • August 2023: Langchain-ChatGLM was renamed Langchain-Chatchat, and 0.2.0 was released, using fastchat as the model-serving solution and supporting more models and databases.

  • October 2023: Langchain-Chatchat 0.2.5 released, introducing Agent functionality; the open-source project won third prize at the hackathon held by Founder Park, Zhipu AI and Zilliz.

  • December 2023: The Langchain-Chatchat open-source project passed 20K stars.

  • January 2024: LangChain 0.1.x released; after Langchain-Chatchat publishes the stable 0.2.10 release, the 0.2.x series will stop receiving updates and technical support, and all effort will go into developing the more capable Langchain-Chatchat 0.3.x.

  • 🔥 Let's look forward to the future story of Chatchat together ···


Contact Us

Telegram

Telegram

Project Chat Group

QR code

🎉 The Langchain-Chatchat project WeChat group. If you are also interested in this project, you are welcome to join the group chat and take part in the discussion.

Official WeChat Account

QR code

🎉 The official WeChat account of the Langchain-Chatchat project. Scan the QR code to follow us.

langchain-chatchat's People

Contributors

bones-zhu, calcitem, changxubo, chinainfant, eltociear, fengyunzaidushi, fxjhello, gaoyuanzero, glide-the, hzg0601, hzhaoy, imclumsypanda, inksong, keenzhu, kztao, liangtongt, lijia0, liunux4odoo, margox, qiankunli, showmecodett, sysalong, xldistance, yawudede, ykk648, yuehua-s, zhenkaivip, zqt996, zqtgit, zrzrzrzrzrzrzr


langchain-chatchat's Issues

The FileType.UNK file type is not supported in partition: solution

ValueError: Invalid file /home/yawu/Documents/langchain-ChatGLM-master/data. The FileType.UNK file type is not supported in partition.
This error occurs because the input filepath is incorrect. In the demo you need to provide a specific directory + file name, not a folder name.

Hope this helps everyone.

Keep it up, and some suggestions

Keep it up, I think your direction is right.
For the UI, you might borrow from or integrate with github.com/Akegarasu/ChatGLM-webui

Suggestion: add a plugin system

As the title says, make it like stable-diffusion-webui where plugins can be installed, and open a separate repository for users or plugin developers to store or download plugins.

A question about how langchain coordinates the vector store and chatGLM

This piece of code creates the QA model and wires up ChatGLM and the vector store of the local corpus. When langchain answers, what is the order of priority? Does it search the vector store first and fall back to chatglm if nothing is found? Or is it some other mechanism?
knowledge_chain = ChatVectorDBChain.from_llm(
    llm=chatglm,
    vectorstore=vector_store,
    qa_prompt=prompt,
    condense_question_prompt=new_question_prompt,
)

The program hangs after starting

Thanks to the author for the hard work, but I ran into a problem when running the project; please help.
The situation is as follows:

Win10, anaconda environment, Python 3.10, components installed according to requirements.txt.
In addition, I downloaded GanymedeNil\text2vec-large-chinese from Hugging Face and placed it in the project root directory.
chatglm_llm.py was also modified to match the path where ChatGLM-6B is placed.

After running knowledge_based_chatglm.py, GPU memory usage is normal (this file uses the myml/langchain-ChatGLM branch, which fixed the earlier problem of doubled memory usage). Although no exception is raised, after entering the reference file path the program just hangs: one CPU core runs at full load, but there is no further output.

The console output is pasted below; please advise what is going on (the GPU with the warning has no impact, it is a GTX 650; the model actually runs on a P40):

Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|██████████| 8/8 [00:22<00:00, 2.81s/it]
C:\Users\Admin\anaconda3\envs\langchain-glm\lib\site-packages\torch\cuda_init_.py:132: UserWarning:
Found GPU1 NVIDIA GeForce GTX 650 which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability supported by this library is 3.7.

warnings.warn(old_gpu_warn % (d, name, major, minor, min_arch // 10, min_arch % 10))
Input your local knowledge file path 请输入本地知识文件路径:D:\My_Doc\PyTorchProj\ChatGLM\ChatGLM-6B\README.md
No sentence-transformers model found with name GanymedeNil/text2vec-large-chinese. Creating a new one with MEAN pooling.

The demo cannot produce any output

Hello, I tested the project's bundled news-article sample and a text file I uploaded myself. They can be loaded, but no answer is given. What is the situation and how can it be solved? Thanks. PS: 1. I downloaded the full code this morning; 2. the server hardware meets the requirements; 3. I followed the instructions as documented.

24 GB of VRAM still runs out of memory; is dual-GPU operation supported?

RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 23.70 GiB total capacity; 22.18 GiB already allocated; 12.75 MiB free; 22.18 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Running the webui on CPU, error at step 3 when asking

Running in the web UI, file loading works fine, but asking produces an error:

README.txt 已成功加载
Traceback (most recent call last):
File "/home/chwang/.local/lib/python3.8/site-packages/gradio/routes.py", line 394, in run_predict
output = await app.get_blocks().process_api(
File "/home/chwang/.local/lib/python3.8/site-packages/gradio/blocks.py", line 1075, in process_api
result = await self.call_function(
File "/home/chwang/.local/lib/python3.8/site-packages/gradio/blocks.py", line 884, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/chwang/.local/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/chwang/.local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/home/chwang/.local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "webui.py", line 31, in get_answer
resp, history = kb.get_knowledge_based_answer(
File "/repo/chaowang/AI/langchain-ChatGLM/knowledge_based_chatglm.py", line 98, in get_knowledge_based_answer
retriever=vector_store.as_retriever(search_kwargs={"k": VECTOR_SEARCH_TOP_K}),
AttributeError: 'NoneType' object has no attribute 'as_retriever'

Using the int4 quantized version, inference shows it needs to allocate 10 GB of VRAM

File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b-int4-qe/modeling_chatglm.py", line 581, in forward
attention_outputs = self.attention(
File "/root/miniconda3/envs/gptq/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b-int4-qe/modeling_chatglm.py", line 435, in forward
context_layer, present, attention_probs = attention_fn(
File "/root/.cache/huggingface/modules/transformers_modules/chatglm-6b-int4-qe/modeling_chatglm.py", line 250, in attention_fn
matmul_result = torch.empty(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 10.36 GiB (GPU 0; 14.76 GiB total capacity; 4.54 GiB already allocated; 9.01 GiB free; 4.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

How to fix the torch.cuda.OutOfMemoryError the program raises?

The detailed error message is as follows:

Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|██████████| 8/8 [00:04<00:00, 1.83it/s]
Input your local knowledge file path 请输入本地知识文件路径:E:\try0.md
No sentence-transformers model found with name C:\Users\50902/.cache\torch\sentence_transformers\GanymedeNil_text2vec-large-chinese. Creating a new one with MEAN pooling.
Input your question 请输入问题:什么是瓜子栱?
Traceback (most recent call last):
File "E:\01进行项目#0Y0_AI_Arch\02_digital-humanities\ChatGLM-6B-main\langchain-ChatGLM-master\knowledge_based_chatglm.py", line 67, in
resp, history = get_knowledge_based_answer(query=query,
File "E:\01进行项目#0Y0_AI_Arch\02_digital-humanities\ChatGLM-6B-main\langchain-ChatGLM-master\knowledge_based_chatglm.py", line 45, in get_knowledge_based_answer
chatglm = ChatGLM()
File "E:\01进行项目#0Y0_AI_Arch\02_digital-humanities\ChatGLM-6B-main\langchain-ChatGLM-master\chatglm_llm.py", line 28, in init
super().init()
File "pydantic\main.py", line 339, in pydantic.main.BaseModel.init
File "pydantic\main.py", line 1066, in pydantic.main.validate_model
File "pydantic\fields.py", line 439, in pydantic.fields.ModelField.get_default
File "pydantic\utils.py", line 693, in pydantic.utils.smart_deepcopy
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 146, in deepcopy
y = copier(x, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "C:\Users\50902\AppData\Local\Programs\Python\Python39\lib\copy.py", line 153, in deepcopy
y = copier(memo)
File "E:\01进行项目#0Y0_AI_Arch\02_digital-humanities\ChatGLM-6B-main\venv\lib\site-packages\torch\nn\parameter.py", line 55, in deepcopy
result = type(self)(self.data.clone(memory_format=torch.preserve_format), self.requires_grad)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 23.99 GiB total capacity; 22.97 GiB already allocated; 0 bytes free; 22.98 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Process finished with exit code 1

How to make the model answer strictly based on the retrieved data and reduce made-up answers

For example:

Question

What vehicle models does Li Auto sell?

An article was retrieved

On May 10, Li Auto officially released its financial results for the first quarter of 2022. In the first quarter, the company delivered 31,716 Li ONE vehicles, a year-on-year increase of 152.1%, and achieved operating revenue of 9.56 billion yuan, a year-on-year increase of 167.5%. Meanwhile, Li Auto's net loss for the first quarter was 10.9 million yuan, compared with a net loss of 360 million yuan in the same period last year.

  In the first quarter of 2022, Li Auto's total revenue was 9.56 billion yuan, an increase of 167.5% from 3.58 billion yuan in the first quarter of 2021 and a decrease of 10.0% from 10.62 billion yuan in the fourth quarter of 2021. Of this, vehicle sales revenue in the first quarter of 2022 was 9.31 billion yuan. Li Auto said the increase in vehicle sales revenue compared with the first quarter of 2021 was mainly attributable to the increase in vehicles delivered in the first quarter of 2022, while the decrease compared with the fourth quarter of 2021 was mainly attributable to the seasonal impact of the Spring Festival holiday, which reduced the number of vehicles delivered in the first quarter of 2022.

chatglm's answer

Li Auto is a new-energy vehicle manufacturer; the models it sells are mainly new-energy vehicles, including the Li ONE, Li P7, etc.

How can chatglm be made to answer strictly based on the retrieved content instead of making things up?
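
One common mitigation, independent of this project's built-in prompt, is to make the QA prompt template explicitly forbid answering from anything other than the retrieved context. A hedged sketch with langchain's PromptTemplate follows; the wording is illustrative, not the project's actual template.

from langchain.prompts import PromptTemplate

strict_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Answer strictly and only from the context below. "
        "If the answer is not contained in the context, reply \"I don't know\"; do not guess.\n"
        "----------------\n"
        "{context}\n"
        "----------------\n"
        "Question: {question}"
    ),
)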

[Reproduction issue] Text extracted from the knowledge base is garbled when constructing the prompt

Hi, I am trying to reproduce the results in the README, also using the ChatGLM-6B README as the input text, but I find that the text extracted from the knowledge base is garbled, which makes the constructed prompt unusable. I would like to know how to solve this.

System: Based on the following content, answer the user's question concisely and professionally.
    If the answer cannot be derived from it, say "I don't know" or "There is not enough relevant information"; do not try to make up an answer. Please answer in Chinese.
    ----------------
    # ChatGLM-6B

[GLM-130B@ICLR 23]

[GLM@ACL 22]

Blog ¢ ð

[GitHub]

[GitHub] ¢ ð

HF Repo ¢ ð

Twitter ¢ ð
    ----------------

Cannot open the gradio page

$ python webui.py
/home/zsd/.local/lib/python3.10/site-packages/gradio/components.py:164: UserWarning: Unknown style parameter: height
warnings.warn(f"Unknown style parameter: {key}")
Running on local URL: http://0.0.0.0:7860

To create a public link, set share=True in launch().

nltk package unable to either download or load local nltk_data folder

I'm running this project on an offline Windows Server environment, so I downloaded the Punkt and averaged_perceptron_tagger tokenizers into this directory:
'nltk_data/tokenizers/punkt/english.pickle', but I keep receiving this LookupError:

LookupError:
**********************************************************************
  Resource averaged_perceptron_tagger not found.
  Please use the NLTK Downloader to obtain the resource:

  >>> import nltk
  >>> nltk.download('averaged_perceptron_tagger')

  For more information see: https://www.nltk.org/data.html

  Attempted to load taggers/averaged_perceptron_tagger/averaged_perceptron_tagger.pickle

  Searched in:
    - 'C:\\Users\\username/nltk_data'
    - 'C:\\Users\\username\\AppData\\Local\\Programs\\Python\\Python38\\nltk_data'
    - 'C:\\Users\\username\\AppData\\Local\\Programs\\Python\\Python38\\share\\nltk_data'
    - 'C:\\Users\\username\\AppData\\Local\\Programs\\Python\\Python38\\lib\\nltk_data'
    - 'C:\\Users\\username\\AppData\\Roaming\\nltk_data'
    - 'C:\\nltk_data'
    - 'D:\\nltk_data'
    - 'E:\\nltk_data'
**********************************************************************

I put the nltk_data file in almost all of the directories above but this error keeps coming up. How can I solve this on an offline machine?

It takes a very long time to output the answer; can the text-vectorization part be done in advance and stored?

GPU: 4090 with 24 GB of VRAM
After feeding in a 5,000-character document and asking a question to be answered from it, each question takes several minutes before the answer appears, and the second question runs out of memory.

Questions:
(1) Is this level of efficiency normal?
(2) If so, can the text-vectorization step be done in advance and stored?

Because after entering the document path, the pipeline goes through reading the text - splitting the text - vectorizing the text - vectorizing the question - matching the top-k text vectors most similar to the question vector - adding the matched text to the prompt as context together with the question - submitting it to the LLM to generate the answer. Can the text-vectorization part be done in advance and stored?
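
Persisting the vectorization step is possible in principle. Below is a minimal sketch using the langchain 0.0.x FAISS wrapper; the file path, model name and save directory are illustrative. The index is built once, and later runs only load it instead of re-embedding the document.

from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings(model_name="GanymedeNil/text2vec-large-chinese")

# one-time: read, split and embed the document, then save the FAISS index to disk
docs = TextLoader("my_doc.txt", encoding="utf-8").load()
chunks = CharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)
FAISS.from_documents(chunks, embeddings).save_local("vector_store/my_doc")

# later runs: load the prebuilt index instead of re-vectorizing the text
vector_store = FAISS.load_local("vector_store/my_doc", embeddings)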

When I try to run `python knowledge_based_chatglm.py`, I get this error on macOS (M1 Max, OS 13.2)

~/Downloads/langchain-ChatGLM-master $ python knowledge_based_chatglm.py
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
                       ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/urllib3/connectionpool.py", line 386, in _make_request
    self._validate_conn(conn)
  File "/opt/homebrew/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
    conn.connect()
  File "/opt/homebrew/lib/python3.11/site-packages/urllib3/connection.py", line 419, in connect
    self.sock = ssl_wrap_socket(
                ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 517, in wrap_socket
    return self.sslsocket_class._create(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1075, in _create
    self.do_handshake()
  File "/opt/homebrew/Cellar/[email protected]/3.11.2_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1346, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/requests/adapters.py", line 489, in send
    resp = conn.urlopen(
           ^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/urllib3/connectionpool.py", line 787, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/urllib3/util/retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /THUDM/chatglm-6b/resolve/main/tokenizer_config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/lijin02/Downloads/langchain-ChatGLM-master/knowledge_based_chatglm.py", line 11, in <module>
    from chatglm_llm import ChatGLM
  File "/Users/lijin02/Downloads/langchain-ChatGLM-master/chatglm_llm.py", line 19, in <module>
    tokenizer = AutoTokenizer.from_pretrained(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 640, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 484, in get_tokenizer_config
    resolved_config_file = cached_file(
                           ^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/utils/hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
                    ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1166, in hf_hub_download
    metadata = get_hf_file_metadata(
               ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1498, in get_hf_file_metadata
    r = _request_wrapper(
        ^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 407, in _request_wrapper
    response = _request_wrapper(
               ^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 442, in _request_wrapper
    return http_backoff(
           ^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 129, in http_backoff
    response = requests.request(method=method, url=url, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/requests/adapters.py", line 563, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /THUDM/chatglm-6b/resolve/main/tokenizer_config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)')))

Error using the new version with langchain

Error with the new changes:

The code is

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])


local_llm = ChatGLM()

print(local_llm('What is the capital of France? '))
print(local_llm('Translate to German: How are you?'))
print(local_llm('Translate to Chinese: How are you?'))
llm_chain = LLMChain(prompt=prompt, 
                     llm=local_llm)
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████| 8/8 [00:06<00:00,  1.30it/s]
The dtype of attention mask (torch.int64) is not bool
history:  []
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/ubuntu/langchain_test2/test3_chatglp.py:35 in <module>                        │
│                                                                                                  │
│    32                                                                                            │
│    33 local_llm = ChatGLM()                                                                      │
│    34                                                                                            │
│ ❱  35 print(local_llm('What is the capital of France? '))                                        │
│    36 print(local_llm('Translate to German: How are you?'))                                      │
│    37 print(local_llm('Translate to Chinese: How are you?'))                                     │
│    38 llm_chain = LLMChain(prompt=prompt,                                                        │
│                                                                                                  │
│ /home/ubuntu/langchain_test2/.venv/lib/python3.11/site-packages/langchain/llms/base │
│ .py:246 in __call__                                                                              │
│                                                                                                  │
│   243 │                                                                                          │
│   244 │   def __call__(self, prompt: str, stop: Optional[List[str]] = None) -> str:              │
│   245 │   │   """Check Cache and run the LLM on the given prompt and input."""                   │
│ ❱ 246 │   │   return self.generate([prompt], stop=stop).generations[0][0].text                   │
│   247 │                                                                                          │
│   248 │   @property                                                                              │
│   249 │   def _identifying_params(self) -> Mapping[str, Any]:                                    │
│                                                                                                  │
│ /home/ubuntu/langchain_test2/.venv/lib/python3.11/site-packages/langchain/llms/base │
│ .py:140 in generate                                                                              │
│                                                                                                  │
│   137 │   │   │   │   output = self._generate(prompts, stop=stop)                                │
│   138 │   │   │   except (KeyboardInterrupt, Exception) as e:                                    │
│   139 │   │   │   │   self.callback_manager.on_llm_error(e, verbose=self.verbose)                │
│ ❱ 140 │   │   │   │   raise e                                                                    │
│   141 │   │   │   self.callback_manager.on_llm_end(output, verbose=self.verbose)                 │
│   142 │   │   │   return output                                                                  │
│   143 │   │   params = self.dict()                                                               │
│                                                                                                  │
│ /home/ubuntu/langchain_test2/.venv/lib/python3.11/site-packages/langchain/llms/base │
│ .py:137 in generate                                                                              │
│                                                                                                  │
│   134 │   │   │   │   {"name": self.__class__.__name__}, prompts, verbose=self.verbose           │
│   135 │   │   │   )                                                                              │
│   136 │   │   │   try:                                                                           │
│ ❱ 137 │   │   │   │   output = self._generate(prompts, stop=stop)                                │
│   138 │   │   │   except (KeyboardInterrupt, Exception) as e:                                    │
│   139 │   │   │   │   self.callback_manager.on_llm_error(e, verbose=self.verbose)                │
│   140 │   │   │   │   raise e                                                                    │
│                                                                                                  │
│ /home/ubuntu/langchain_test2/.venv/lib/python3.11/site-packages/langchain/llms/base │
│ .py:325 in _generate                                                                             │
│                                                                                                  │
│   322 │   │   generations = []                                                                   │
│   323 │   │   for prompt in prompts:                                                             │
│   324 │   │   │   text = self._call(prompt, stop=stop)                                           │
│ ❱ 325 │   │   │   generations.append([Generation(text=text)])                                    │
│   326 │   │   return LLMResult(generations=generations)                                          │
│   327 │                                                                                          │
│   328 │   async def _agenerate(                                                                  │
│                                                                                                  │
│ in pydantic.main.BaseModel.__init__:341                                                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValidationError: 1 validation error for Generation
text
  none is not an allowed value (type=type_error.none.not_allowed)

A question about the chat_history logic

Thanks for open-sourcing this.
I looked at the chat_history logic: it takes the history of all previous rounds and has chatglm merge it into a single question.
In that case:
If there have been many rounds of dialogue and the current question no longer has any relation to the much earlier history, won't the newly generated question become inaccurate?
Is there a way to tell that the current question starts a new topic unrelated to the previous ones?
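
For reference, in the langchain 0.0.x API used here the history-merging step is controlled by the condense_question_prompt. A hedged sketch follows, where chatglm and vector_store are assumed to be an already constructed LLM wrapper and vector store, and the prompt wording is illustrative.

from langchain.prompts import PromptTemplate
from langchain.chains import ConversationalRetrievalChain

condense_prompt = PromptTemplate(
    input_variables=["chat_history", "question"],
    template=(
        "Given the conversation history and a follow-up question, rewrite the follow-up "
        "as a standalone question. If the follow-up is unrelated to the history, "
        "return it unchanged.\n"
        "History:\n{chat_history}\n"
        "Follow-up question: {question}\n"
        "Standalone question:"
    ),
)

chain = ConversationalRetrievalChain.from_llm(
    llm=chatglm,                            # assumed: an existing ChatGLM LLM wrapper
    retriever=vector_store.as_retriever(),  # assumed: an existing vector store
    condense_question_prompt=condense_prompt,
)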

ValueError: 150001 is not in list raised on Mac M2 Max

I changed the model-loading code in chatglm_llm.py to the following:
model_path = 'chatglm-6b'
max_token: int = 2048
temperature: float = 0.1
top_p = 0.9
history = []
tokenizer = AutoTokenizer.from_pretrained(
    model_path, trust_remote_code=True
)
model = (
    AutoModel.from_pretrained(model_path, trust_remote_code=True)
    .half().to('mps')
)

Question: does CPU-only work?

Very cool implementation, it really broadened my horizons! It ran smoothly on a GPU machine.
Also: I want to deploy it on another CPU-only server (its performance is decent) and would like to know in advance:

1. Can it run on CPU only? (chatglm can, but considering the other projects it depends on, I am not sure.)
2. Installing detectron2 on Windows seems a bit difficult... any good experience to share? (I hit problems when installing on the GPU machine and lazily switched to Ubuntu to get it done.)

How to read multiple txt documents?

As the title says, how do I read multiple txt documents? The sample code only shows reading a single document; even after changing the input to a string I can only point to one document, wildcards cannot be used to specify multiple documents, and a list of file paths cannot be passed in either.
Thanks in advance.
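
One possible workaround, not specific to this project and with an illustrative folder name, is to expand the paths yourself and merge the loaded documents before splitting and vectorizing them.

from glob import glob
from langchain.document_loaders import TextLoader

docs = []
for path in glob("my_docs/*.txt"):  # collect every txt file in the folder
    docs.extend(TextLoader(path, encoding="utf-8").load())
# `docs` can then be split and vectorized exactly as in the single-file example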

Runtime environment: how much GPU memory is needed?

According to THUDM/ChatGLM-6B, GPU memory usage should be around 13 GB, but after running the script even 24 GB is not enough.

Is this caused by langchain's RetrievalQA.from_chain_type call, or is the imported HuggingFaceEmbeddings too large?

It would be great if the environment configuration used when running the model could be provided. Many thanks!

Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████| 8/8 [00:09<00:00, 1.19s/it]
Traceback (most recent call last):
File "/home/node6/liujie/workspace/langchain-ChatGLM/knowledge_based_chatglm.py", line 11, in
from chatglm_llm import ChatGLM
File "/home/node6/liujie/workspace/langchain-ChatGLM/chatglm_llm.py", line 9, in
class ChatGLM(LLM):
File "pydantic/main.py", line 221, in pydantic.main.ModelMetaclass.new
File "pydantic/fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.init
File "pydantic/fields.py", line 546, in pydantic.fields.ModelField.prepare
File "pydantic/fields.py", line 570, in pydantic.fields.ModelField._set_default_and_type
File "pydantic/fields.py", line 439, in pydantic.fields.ModelField.get_default
File "pydantic/utils.py", line 693, in pydantic.utils.smart_deepcopy
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 270, in _reconstruct
state = deepcopy(state, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 230, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 296, in _reconstruct
value = deepcopy(value, memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/copy.py", line 153, in deepcopy
y = copier(memo)
File "/home/node6/py_env/anaconda3/lib/python3.9/site-packages/torch/nn/parameter.py", line 55, in deepcopy
result = type(self)(self.data.clone(memory_format=torch.preserve_format), self.requires_grad)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 23.53 GiB total capacity; 22.80 GiB already allocated; 8.62 MiB free; 22.80 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Run failed: Loading checkpoint was killed before reaching 100%. What could be the reason?

The log is as follows:
python knowledge_based_chatglm.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 38%|██████▊ | 3/8 [00:02<00:03, 1.28it/s]Killed

Why does it load the checkpoint on every run?

After I downloaded the embedding model locally, the project no longer starts properly.
The original code prints the following every time it runs:
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:26<00:00, 13.11s/it]
/home/rgzn/miniconda3/envs/langchain/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py:192: UserWarning: ChatVectorDBChain is deprecated - please use from langchain.chains import ConversationalRetrievalChain
warnings.warn

Why does it still error out in the end... (crying)

Failed to import transformers.models.t5.configuration_t5 because of the following error (look up to see
its traceback):
Failed to import transformers.onnx.config because of the following error (look up to see its traceback):
DLL load failed while importing _imaging: The specified module could not be found.

How to improve the results

image

As shown in the image, after combining this project's own README.md with the project, the quality of the answers is not ideal. In what ways could it be optimized?

Error: Use `repo_type` argument if needed.

Traceback (most recent call last):
File "/home/zsd/langchain-ChatGLM/knowledge_based_chatglm.py", line 102, in
init_cfg(LLM_MODEL, EMBEDDING_MODEL, LLM_HISTORY_LEN)
File "/home/zsd/langchain-ChatGLM/knowledge_based_chatglm.py", line 46, in init_cfg
chatglm.load_model(model_name_or_path=llm_model_dict[LLM_MODEL])
File "/home/zsd/langchain-ChatGLM/chatglm_llm.py", line 52, in load_model
self.tokenizer = AutoTokenizer.from_pretrained(
File "/home/zsd/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 619, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/home/zsd/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 463, in get_tokenizer_config
resolved_config_file = cached_file(
File "/home/zsd/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "/home/zsd/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 112, in _inner_fn
validate_repo_id(arg_value)
File "/home/zsd/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 160, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/Users/liuqian/Downloads/ChatGLM-6B/chatglm-6b'. Use repo_type argument if needed.

UnicodeDecodeError: 'utf-8' codec can't decode on the second round of dialogue

The first round of dialogue works fine. After the model returns its output and prompts for the next question, entering another question raises the error below:
File "/root/--2023/mon_Apr/langchain-ChatGLM/knowledge_based_chatglm.py", line 73, in
query = input("Input your question 请输入问题:")
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe5 in position 3: invalid continuation byte
What is the problem here?

Which Python version are you using?

Hi Panda, I saw the pip install -r requirements command in the README and want to confirm whether you are using Python 2 or Python 3, because both my pip and pip3 versions are 22.3.
