langchain-chinese-getting-started-guide's People

Contributors

asurinsaka, cloud2303, eltociear, liaokongvfx, liugddx, pengwork, thiswind, zhaozhiming

langchain-chinese-getting-started-guide's Issues

构建本地知识库问答机器人 (local knowledge-base QA bot): exceeding max context length

Hello, using your 构建本地知识库问答机器人 code, I uploaded a PDF file, and when asking questions I get the following error:
This model's maximum context length is 4097 tokens, however you requested 10844 tokens (10588 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
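A likely cause is that the retrieved chunks are too large for the 4097-token window. A minimal sketch of one way to keep requests small, assuming `documents` came from the guide's PDF loader (the chunk size and k values are illustrative, not prescriptive):

```python
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Split the PDF into small chunks so the stuffed prompt stays below 4097 tokens
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
split_docs = text_splitter.split_documents(documents)  # `documents` from the PDF loader

docsearch = Chroma.from_documents(split_docs, OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(max_tokens=256),
    chain_type="stuff",
    retriever=docsearch.as_retriever(search_kwargs={"k": 2}),  # fewer chunks per question
)
```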

Missing files

In Part 4 of the hands-on section, on building the local knowledge base, the repository's documents do not include the 科大讯飞 (iFLYTEK) content shown in the example.

Please keep updating

This is the best Chinese LangChain article I have seen; I hope it keeps being updated.

Long-text summarization attempt: the final output is empty

When I tried the long-text summarization example, the text splitting and the per-chunk summaries both worked, but the final merged summary came out empty. The code is copied verbatim from the example; I only added my own API key. Could someone shed some light?

```
documents:1
documents:331

Entering new RefineDocumentsChain chain...

Entering new LLMChain chain...
Prompt after formatting:
Write a concise summary of the following:

"声明:本书为爱奇电子书(www.i7wu.cn)的用户上传至其在本站的存储空间,本站只提供TXT全集电子书存储服务以及免费下载服务,以下作品内容之版权与本站无任何关系。

用户上传之内容开始

《地藏心经》

作者:铸剑师无名

正文

第一第十五章 天下势,渡江(一)

“渝州陆家?!”

虽然原本的那个秦逸,每日只知道苦读诗书,从未与商贾们打过交道,但是渝州陆家的名声,他还是知道。

陆家三代为官,官至两江总督,五代经商,百年经营,家私何止千万,直至今朝,俨然已是江南一等士族大户。渝州陆氏以皮货起家,乃是西北之地数得上号的商户,西北之地所产的皮货,有三成经他们之手卖往江南。

若只是如此,陆氏也不过是一头肥硕的羔羊,只待他人宰杀。"

CONCISE SUMMARY:

Finished chain.

Entering new LLMChain chain...
Prompt after formatting:
Your job is to produce a final summary
We have provided an existing summary up to a certain point:
本书是爱奇电子书提供的免费TXT全集电子书,其中的内容是关于铸剑师无名的《地藏心经》,其中描述了渝州陆家三代为官,五代经商,百年经营,家私何止千万,是江南一等士族大户的故事。
We have the opportunity to refine the existing summary(only if needed) with some more context below.


陆氏三代家主都极具雄韬伟略,以千金买官,以万金开路,更是在蛮夷南侵之时,倾尽家资招兵买马,拒十万蛮夷铁骑于侯关外,短短三年间,便一手扶持起了都护大将军——苏和,抗夷大将军——邓昌。

以姻亲握住兵权后,陆氏子弟一路仕途平坦,百年来,人才辈出,更有陆云,陆羽等良将贤才。

而今,已是雄踞渝、豫两地的世家阀门,这江南数万水军,便是掌握在这一代的陆家族长手中。

朝廷无权,皇帝无兵,短短十年,南朝便形同虚设,各地封疆大使,世家阀门手握重兵,除了京都三省还在南朝皇族手中,其他俨然已经分地而治。

西北,邓、李、苏、何、公孙五家世家阀门割据一方,联手共抗蛮夷合并后的金国。

南方,陆、熊、刘、郑四家百年士族据守江南,与中山国相持已然数十载。

东方,京都三省雄兵三十万,黑甲铁骑八千,时刻防范着秦国有所异动。(备注:黑甲铁骑配备长枪,马刀,黑铁重甲,所乘骑的乃是西域宛马,是南朝立国时便赫赫有名的百战铁骑。曾以八千黑甲铁骑破中山国十万雄兵而名动天下。)

这些,便是张狂融合完原本那个‘秦逸’的记忆,而整理出的天下大势。

Given the new context, refine the original summary
If the context isn't useful, return the original summary.

Finished chain.

Entering new LLMChain chain...
Prompt after formatting:
Your job is to produce a final summary
We have provided an existing summary up to a certain point:

本书是爱奇电子书提供的免费TXT全集电子书,其中的内容是关于铸剑师无名的《地藏心经》,其中描述了渝州陆家三代为官,五代经商,百年经营,家私何止千万,是江南一等士族大户的故事,以及他们掌握兵权,一路仕途平坦,百年来人才辈出,更有陆云,陆羽等
良将贤才,形成了当时西北,邓、李、苏、何、公孙五家世家阀门割据一方的形势,南方陆、熊、刘、郑四家百年士族据守江南,与中山国相持,东方京都三省雄兵三十万,黑甲铁骑八千,防范着秦国的动向。
We have the opportunity to refine the existing summary(only if needed) with some more context below.

“少爷。这船都被陆家车行的人包下了。”

不过一会儿,秦汉便略显沮丧地走了回来。渝州陆家势大,而今就连附属下面的陆家车行,身份也是水涨船高。自从秦逸父亲病逝后,秦家家道中落,与陆家比不得,况且此地也并非西北所属,秦家纵然还有些人脉,却也用不上。

所以,为了避免麻烦,他也没敢去与陆家争船。

“嗯。”秦逸默然,脸色平静,对着秦汉点点头,也未多说些什么。虽然他心中也想早点赶往渝州,在年关前,布置些家业,好早些安定下来。“我知道了。”

“敢问公子贵姓?”

这时,秦逸身旁的中年商人,突然出口问道。原来他见秦逸,ww W.l6K .cN面容俊逸,又是一身锦衣华服,虽然风尘仆仆,但是谈吐举止中,无一不带着士族风范,不由得起了巴结之心。

南朝商人地位虽然要略高于前朝列代,但是依旧排在最后。“士农工商”,商人自古就有着“不劳而获”之名。

Given the new context, refine the original summary
If the context isn't useful, return the original summary.

Finished chain.

Entering new LLMChain chain...
Prompt after formatting:
Your job is to produce a final summary
We have provided an existing summary up to a certain point:

本书是爱奇电子书提供的免费TXT全集电子书,其中的内容是关于渝州陆家三代为官,五代经商,百年经营,家私何止千万,是江南一等士族大户的故事,以及他们掌握兵权,一路仕途平坦,百年来人才辈出,更有陆云,陆羽等良将贤才,形成了当时西北,邓、李、苏
、何、公孙五家世家阀门割据一方的形势,南方陆、熊、刘、郑四家百年士族据守江南,与中山国相持,东方京都三省雄兵三十万,黑甲铁骑八千,防范着秦国的动向。秦家因此面临着渝州陆家的势力,无法争取船只返回渝州,但是他们的士族风范仍然被南朝商人所认
可。
We have the opportunity to refine the existing summary(only if needed) with some more context below.

“姓秦。”秦逸面色淡然,转头看了中年商人一眼,出声道。

他来于后世,对商人并无轻视之意,所以也没有摆什么士族的架子。

中年商人闻言微微一愣,随即动容,隐隐带着喜悦,他躬腰低头,对着秦逸恭恭敬敬地行了一个大礼,而后出声询问道:“敢问可是晋中秦家?!”

“正是!”说话的确是秦汉,秦家在西北之地声名远播,善名百里,虽然手中无兵无权,但是在西北士族中还是举足轻重,俨然已成精神领袖。

“敢问,可是秦逸公子?!”中年商人对着秦逸又是一个大礼,声音颇为颤抖地说道。此番回程,他便听说了秦家少爷要前往渝州,却想不到自己居然正好遇上!

“五代行善,何其不易!夫天下之人,独晋中秦家也!”……

秦家善名,至今已然百年有余。

“嗯。”秦逸点头,并未多说。一路行来,他已经陆续感受到了秦家在这个世界上的声望。

一世行善容易,但是五代行善,中原数千年来,独此一家。就连数十年前,蛮夷赫连氏族入侵中原,都刻意避开了晋中秦家。在草原蛮族的教义中,屠戮真正的善人,会被狼神抛弃,灵魂永世不得安息。

Given the new context, refine the original summary
If the context isn't useful, return the original summary.

Finished chain.

Entering new LLMChain chain...
Prompt after formatting:
Your job is to produce a final summary
We have provided an existing summary up to a certain point:

本书是爱奇电子书提供的免费TXT全集电子书,其中的内容是关于渝州陆家三代为官,五代经商,百年经营,家私何止千万,是江南一等士族大户的故事,以及他们掌握兵权,一路仕途平坦,百年来人才辈出,更有陆云,陆羽等良将贤才,形成了当时西北,邓、李、苏
、何、公孙五家世家阀门割据一方的形势,南方陆、熊、刘、郑四家百年士族据守江南,与中山国相持,东方京都三省雄兵三十万,黑甲铁骑八千,防范着秦国的动向。秦家因此面临着渝州陆家的势力,无法争取船只返回渝州,但是他们的士族风范仍然被南朝商人所认
可,五代行善,秦家百年声望远播,在西北士族中占据举足轻重的地位,即使没有兵权,也成为了精神领袖,因此草原蛮族入侵时也刻意避开了晋中秦家。
We have the opportunity to refine the existing summary(only if needed) with some more context below.

就在秦逸准备寻一处清净地,安安静静的等待陆家车行的人先走时,远处,一团人簇拥着一个青衫老者往这边走来。而为首的,正是昨日在路上遇到的那个满脸扎须的壮年汉子。

“那便是陆家车行的管事。”一旁的中年商人适时的报出了那位青衫老者的身份。

“陆氏车行?管事?”

秦逸眉头一挑,不由得心头一动。若是等到陆家车行货物运完,这一来一去,天怕是已经摸黑了,想来渡江只能等到明晚。既然面前,就是陆家车行的管事,何不找他试试,看看能不能一并登船渡江。

想到这,秦逸略微整了整衣衫,脸上挂着一副淡定的笑容,迎了上去。

“长者有礼了!”秦逸走到人群前,对着为首的青衫老者微微一拱手,行礼道。

Given the new context, refine the original summary
If the context isn't useful, return the original summary.

Finished chain.

Finished chain.
```

The long-text summarization code returns mostly English o.o

'\n\nA summary of the book "地藏心经" reveals that it was uploaded by a user on the website i7wu.cn, which only provides storage and free download services for the complete TXT version of the book. The protagonist, Qin Yi, is from the prominent 晋中秦家 and is trying to establish his family's business in 渝州. However, he is unable to compete with the powerful 渝州陆家 family and decides to avoid conflict. While waiting for the caravan of the 渝州陆家 family to pass before finding a peaceful place, Qin Yi is approached by a middle-aged merchant who is impressed by his noble demeanor. The merchant, upon learning that Qin Yi is from the 晋中秦家, shows great respect and asks for his name. Qin Yi then decides to seek the help of the head of the 渝州陆家 family's transportation department in order to cross the river and continue his journey.'
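The stock prompts used by `load_summarize_chain` are in English, which is why the model tends to answer in English. A minimal sketch of swapping in Chinese prompts, assuming the `llm` and `split_documents` from the guide's example:

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.prompts import PromptTemplate

# Initial summary prompt; {text} receives each document chunk
prompt = PromptTemplate(
    template="用中文简要总结以下内容:\n\n{text}\n\n总结:",
    input_variables=["text"],
)
# Refine-step prompt; {existing_answer} is the summary so far
refine_prompt = PromptTemplate(
    template=(
        "已有总结:\n{existing_answer}\n\n"
        "请结合下面的新内容完善总结(如无帮助则原样返回),并用中文回答:\n{text}"
    ),
    input_variables=["existing_answer", "text"],
)
chain = load_summarize_chain(
    llm, chain_type="refine", question_prompt=prompt, refine_prompt=refine_prompt
)
summary = chain.run(split_documents)
```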

Can LangChain do DB-based text2sql?

The most striking LangChain use cases so far are chatPdf and chatExcel. Can chatExcel do text2sql over a large database dataset (10 million rows)?
Also, are there other classic LangChain application examples you could share?

Finally, thank you for this project; it's excellent!
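For text2sql specifically, LangChain ships SQLDatabaseChain, which sends only the table schema (plus a few sample rows) to the model rather than the data itself, so a 10-million-row table is not in itself the limiting factor. A minimal sketch (the connection URI and question are placeholders):

```python
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

# Point LangChain at any SQLAlchemy-compatible database
db = SQLDatabase.from_uri("mysql+pymysql://user:pass@localhost/mydb")
db_chain = SQLDatabaseChain(llm=OpenAI(temperature=0), database=db, verbose=True)
db_chain.run("How many rows are in the orders table?")
```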

Error converting to document objects

The failing call is document = loader.load(); it keeps raising the following error:

```
Exception has occurred: ImportError
cannot import name 'open_filename' from 'pdfminer.utils' (G:\Anaconda\anaconda3\envs\longchain\lib\site-packages\pdfminer\utils.py)
  File "G:\llm_py\gcd_qa.py", line 11, in <module>
    documents = loader.load()
ImportError: cannot import name 'open_filename' from 'pdfminer.utils' (G:\Anaconda\anaconda3\envs\longchain\lib\site-packages\pdfminer\utils.py)
```

Hello, I have a question

Thanks very much for your help, it's been really useful. I've run into a problem with docs = pinecone.similarity_search(prompt, include_metadata=True, k=2): if k is left at the default 4, the search sometimes fails with "This model's maximum context length is 4097 tokens". With k=2 there is no problem. I don't understand why this happens.

```python
import os
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.chat_models import ChatOpenAI
from langchain.chains.question_answering import load_qa_chain

pinecone.init(
    api_key="xxxxxxx",
    environment="xxxxx"
)
index_name = "xxxxxx"
os.environ["OPENAI_API_KEY"] = "XXXXXXXXX"

embeddings = OpenAIEmbeddings(openai_api_key=os.environ['OPENAI_API_KEY'])
# Note: this rebinds the name `pinecone` from the module to the vector store
pinecone = Pinecone.from_existing_index(index_name, embeddings)
openAI = ChatOpenAI(temperature=0.3, model_name='gpt-3.5-turbo', openai_api_key=os.environ['OPENAI_API_KEY'])
chain = load_qa_chain(openAI, chain_type="stuff")

def askGPT(prompt):
    docs = pinecone.similarity_search(prompt, include_metadata=True, k=4)
    # Run a question-answering chain over the retrieved documents
    ch = chain.run(input_documents=docs, question=prompt)
    return ch
```

This is my code; with the default k=4 it triggers the error.
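With chain_type="stuff", all k retrieved documents are concatenated into a single prompt, so four long chunks can overflow the 4097-token window while two still fit. One hedged workaround is a chain type that processes documents separately; alternatively, index smaller chunks so k=4 stays within budget:

```python
from langchain.chains.question_answering import load_qa_chain

# "stuff" concatenates all k retrieved documents into one prompt; "map_reduce"
# queries each document separately and then merges the partial answers.
chain = load_qa_chain(openAI, chain_type="map_reduce")
```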

Outdated API usage

The current code produces a lot of warnings, for example:

UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`

Could the API calls be brought up to date?

Can the earlier examples (single-turn Q&A, Google search, QA bot, etc.) use GPT-3.5?

Can the earlier examples, i.e. single-turn Q&A, Google search, summarization, and the QA bot, use the GPT-3.5 model?

For instance, in the earlier examples, if the llm is changed to the following, nothing errors:

ChatOpenAI(temperature=0.1, max_tokens=2048)

The results even seem a little better. From the docs it looks like this should work, but I'm not sure it is really using the GPT-3.5 model.

Quoted from the docs:

LLM calls

Multiple model interfaces are supported, e.g. OpenAI, Hugging Face, AzureOpenAI ...
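For what it's worth, ChatOpenAI defaults to gpt-3.5-turbo, and passing model_name explicitly removes any doubt about which model is used; a minimal sketch:

```python
from langchain.chat_models import ChatOpenAI

# ChatOpenAI defaults to gpt-3.5-turbo; naming the model explicitly makes it unambiguous
chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.1, max_tokens=2048)
print(chat.model_name)  # -> "gpt-3.5-turbo"
```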

The long-text summarization example errors when executed

langchain version: v0.0.157

Failing line: chain.run(split_documents[:5])

Error message:

```
Traceback (most recent call last):
  File "/main.py", line 56, in <module>
    chain.run(split_documents[:5])
  File "/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 238, in run
    return self(args[0], callbacks=callbacks)[self.output_keys[0]]
  File "/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__
    raise e
  File "/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 136, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/venv/lib/python3.9/site-packages/langchain/chains/combine_documents/base.py", line 84, in _call
    output, extra_return_dict = self.combine_docs(
  File "/venv/lib/python3.9/site-packages/langchain/chains/combine_documents/refine.py", line 94, in combine_docs
    res = self.initial_llm_chain.predict(callbacks=callbacks, **inputs)
  File "/venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 213, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
  File "/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 142, in __call__
    raise e
  File "/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 136, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 69, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 79, in generate
    return self.llm.generate_prompt(
  File "/venv/lib/python3.9/site-packages/langchain/llms/base.py", line 127, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks)
  File "/venv/lib/python3.9/site-packages/langchain/llms/base.py", line 176, in generate
    raise e
  File "/venv/lib/python3.9/site-packages/langchain/llms/base.py", line 170, in generate
    self._generate(prompts, stop=stop, run_manager=run_manager)
  File "/venv/lib/python3.9/site-packages/langchain/llms/openai.py", line 306, in _generate
    response = completion_with_retry(self, prompt=_prompts, **params)
  File "/venv/lib/python3.9/site-packages/langchain/llms/openai.py", line 106, in completion_with_retry
    return _completion_with_retry(**kwargs)
  File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/usr/local/Cellar/python@3.9/3.9.14/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/usr/local/Cellar/python@3.9/3.9.14/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/venv/lib/python3.9/site-packages/langchain/llms/openai.py", line 104, in _completion_with_retry
    return llm.client.create(**kwargs)
  File "/venv/lib/python3.9/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/liangchen/PycharmProjects/langchain/venv/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 149, in create
    ) = cls.__prepare_create_request(
  File "/venv/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 113, in __prepare_create_request
    url = cls.class_url(engine, api_type, api_version)
  File "/venv/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 37, in class_url
    raise error.InvalidRequestError(
TypeError: __init__() missing 1 required positional argument: 'param'
```

Relevance questions for the local knowledge base

1. Can the relevance scores of the documents an answer cites be returned?
2. Can a relevance threshold be set, so that results beyond the threshold are not returned?
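Both appear possible with the vector stores' scored search. A minimal sketch, assuming a Chroma or FAISS `docsearch` as in the guide (the threshold value is illustrative; Chroma returns a distance, so lower means more relevant):

```python
# Scored variant of similarity search: returns (Document, score) pairs
docs_and_scores = docsearch.similarity_search_with_score(query, k=4)
threshold = 0.4  # illustrative value; tune against your own data
relevant = [doc for doc, score in docs_and_scores if score < threshold]
```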

Advice on a knowledge-base document bot

I want to build a knowledge-base document bot, but the knowledge base is quite large. After loading it and storing it in the vector database, computing embeddings keeps exceeding the token limit. What's a better way to handle this?
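The embedding endpoint has its own per-input token limit, so each document has to be split below it before embedding. A minimal sketch, assuming `documents` came from a loader (chunk sizes are illustrative):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Keep every chunk comfortably below the embedding model's per-input token limit
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
split_docs = splitter.split_documents(documents)  # embed split_docs, not documents
```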

Loading question-answer pairs for QA

A question: our database stores question-answer pairs, pair<question, answer>. When building the documents I only want to index the question.
After retrieving a question via embeddings, I'd take the corresponding answer for further processing.
How should data like this be loaded?
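One hedged approach: build Document objects whose page_content is only the question, and stash the answer in metadata, so retrieval matches on questions alone. A minimal sketch (`qa_pairs` is a hypothetical list of (question, answer) tuples loaded from your DB):

```python
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Embed only the question text; carry the answer along in metadata
docs = [
    Document(page_content=q, metadata={"answer": a})
    for q, a in qa_pairs  # qa_pairs: hypothetical (question, answer) tuples from the DB
]
docsearch = Chroma.from_documents(docs, OpenAIEmbeddings())

hits = docsearch.similarity_search("the user's question", k=1)
answer = hits[0].metadata["answer"]  # post-process the matched answer as needed
```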

A question about the ChatGPT plugin example

I looked at the ChatGPT plugin example on the LangChain site:
tool = AIPluginTool.from_plugin_url("https://www.klarna.com/.well-known/ai-plugin.json")
This single line is the heart of it. I searched the plugins section of OpenAI's docs but couldn't find any other URLs to fill in. Do you know what other URLs can go here?
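For reference, ChatGPT plugins publish their manifest at a well-known path on the plugin's host, so any plugin-enabled site should work in principle; a sketch with a placeholder host:

```python
from langchain.tools import AIPluginTool

# Any plugin-enabled site serves its manifest at this well-known path;
# <plugin-host> is a placeholder for the host you want to call.
tool = AIPluginTool.from_plugin_url("https://<plugin-host>/.well-known/ai-plugin.json")
```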

YouTube video loading example: copied the code verbatim and ran it locally; the second question errors

Using embedded DuckDB without persistence: data will be transient
Question: what is this video mainly about?
The video mainly shows new experimental features of the Unreal Engine 5.2 engine, including foliage rendering, physics simulation, fluid simulation, and the material framework. It presents a highly realistic environment featuring a Rivian R1T electric pickup, using the engine's real-time rendering and physics simulation to demonstrate suspension, tire deformation, and sound-synthesis effects. It also introduces an experimental set of procedural content generation tools built into the engine, which help artists quickly generate scene elements that can interact with other procedural elements.
Question: which scenes does foliage rendering support?
```
Traceback (most recent call last):
  File "load_ytb.py", line 67, in <module>
    result = qa({'question': question, 'chat_history': chat_history})
  File "/Users/xx/.pyenv/versions/3.8.6/lib/python3.8/site-packages/langchain/chains/base.py", line 140, in __call__
    raise e
  File "/Users/xx/.pyenv/versions/3.8.6/lib/python3.8/site-packages/langchain/chains/base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/Users/xx/.pyenv/versions/3.8.6/lib/python3.8/site-packages/langchain/chains/conversational_retrieval/base.py", line 101, in _call
    new_question = self.question_generator.run(
  File "/Users/xx/.pyenv/versions/3.8.6/lib/python3.8/site-packages/langchain/chains/base.py", line 239, in run
    return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
  File "/Users/xx/.pyenv/versions/3.8.6/lib/python3.8/site-packages/langchain/chains/base.py", line 123, in __call__
    inputs = self.prep_inputs(inputs)
  File "/Users/xx/.pyenv/versions/3.8.6/lib/python3.8/site-packages/langchain/chains/base.py", line 216, in prep_inputs
    self._validate_inputs(inputs)
  File "/Users/xx/.pyenv/versions/3.8.6/lib/python3.8/site-packages/langchain/chains/base.py", line 83, in _validate_inputs
    raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'context'}
```

The first example errors

The very first example already errors? Code:
```python
import os

os.environ["OPENAI_API_KEY"] = "sk-aaa"
os.environ["SERPAPI_API_KEY"] = "aaaa"

from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0, max_tokens=2048)

tools = load_tools(["serpapi"])

agent = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)

agent.run("What's the date today? What great events have taken place today in history?")
```

Error:

```
Traceback (most recent call last):
  File "/Users/one/OpenAI/pythonProject3/main.py", line 24, in <module>
    agent = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
  File "/Users/one/OpenAI/pythonProject3/venv/lib/python3.9/site-packages/langchain/agents/initialize.py", line 52, in initialize_agent
    agent_obj = agent_cls.from_llm_and_tools(
  File "/Users/one/OpenAI/pythonProject3/venv/lib/python3.9/site-packages/langchain/agents/agent.py", line 506, in from_llm_and_tools
    cls._validate_tools(tools)
  File "/Users/one/OpenAI/pythonProject3/venv/lib/python3.9/site-packages/langchain/agents/self_ask_with_search/base.py", line 34, in _validate_tools
    raise ValueError(
ValueError: Tool name should be Intermediate Answer, got {'Search'}
```
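The SELF_ASK_WITH_SEARCH agent validates that it receives exactly one tool named "Intermediate Answer", while load_tools(["serpapi"]) names its tool "Search", hence the error. A minimal sketch of the usual fix:

```python
from langchain.agents import initialize_agent, Tool, AgentType
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper

# SELF_ASK_WITH_SEARCH requires exactly one tool named "Intermediate Answer"
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search",
    )
]
llm = OpenAI(temperature=0, max_tokens=2048)
agent = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)
```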

Self-loaded training data is too long; how do I fix this error? This model's maximum context length is 4097 tokens, however you requested 11552 tokens

```
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 11552 tokens (3458 in your prompt; 8094 for the completion). Please reduce your prompt; or completion length.
```

```python
loader = DirectoryLoader('/Users/geektime_chatgpt/llchain_input_data', glob='**/*.txt')
```

The directory /Users/geektime_chatgpt/llchain_input_data contains a single training-data file of only about 4.7 KB, and it already errors. Does anyone know how to fix this? 🥹
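Reading the error, 8094 of the 11552 requested tokens are reserved for the completion, not the prompt, which suggests the LLM's max_tokens is set very high rather than the 4.7 KB input being too long. A minimal sketch of capping it:

```python
from langchain.llms import OpenAI

# Cap the completion budget; the prompt itself (3458 tokens) fits fine
llm = OpenAI(temperature=0, max_tokens=512)
```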

VectorDBQA deprecated

构建本地知识库问答机器人 uses VectorDBQA, which has been deprecated. The author could replace it with RetrievalQA; the code looks roughly like this:

```python
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

llm = OpenAI(temperature=0)
# Create the QA object
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=docsearch.as_retriever())
```

Add a streaming-response example

StreamingStdOutCallbackHandler can only stream to the console. When the chain is wrapped for other methods to call, the response is not streamed.
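One common workaround is a custom callback handler that hands tokens to whatever transport you like, say a queue drained by an SSE or WebSocket handler. A minimal sketch, assuming the LLM is created with streaming=True and callbacks=[handler]:

```python
from queue import Queue
from langchain.callbacks.base import BaseCallbackHandler

class QueueCallbackHandler(BaseCallbackHandler):
    """Push generated tokens onto a queue for an SSE/WebSocket handler to drain."""

    def __init__(self, token_queue: Queue):
        self.token_queue = token_queue

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once per token when the LLM runs with streaming=True
        self.token_queue.put(token)
```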

Error in the 构建向量索引数据库 (building a vector index database) example: InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 23694 tokens (23438 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.

I'm a beginner; thank you for this Chinese LangChain guide, which has helped me get started.

I've been testing the guide's examples locally one by one.

For 构建向量索引数据库, following the guide:

```python
from langchain.vectorstores import Chroma

# Persist the data
docsearch = Chroma.from_documents(documents, embeddings, persist_directory="D:/vector_store")
docsearch.persist()
# Load the data
docsearch = Chroma(persist_directory="D:/vector_store", embedding_function=embeddings)
```

Then I modified the 构建本地知识库问答机器人 example code on that basis; here is part of it:

```python
# Load every file in the folder; each file becomes one document
loader = DirectoryLoader('/Users/ldx/Documents/data/document', glob='**/*.*')
documents = loader.load()

# Initialize the splitter and split the loaded documents
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
split_docs = text_splitter.split_documents(documents)

# Persist the data
docsearch = Chroma.from_documents(documents, embeddings, persist_directory="/Users/ldx/Documents/data/chroma_data")
docsearch.persist()

# Load the data
docsearch = Chroma(persist_directory="/Users/ldx/Documents/data/chroma_data", embedding_function=OpenAIEmbeddings())

# Create the QA object
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(), return_source_documents=True)
# Ask a question (content omitted)
result = qa({"query": ".............."})
print(result)
```

And then it errored!
I debugged for ages without success. Comparing carefully with the 构建本地知识库问答机器人 example, I noticed that
docsearch = Chroma.from_documents(split_docs, embeddings)
passes split_docs, the split documents, not the raw documents. After changing it to
docsearch = Chroma.from_documents(split_docs, embeddings, persist_directory="/Users/ldx/Documents/data/chroma_data") it runs fine.

A careless mistake on my part; recording it here in case anyone else runs into it.

Can the agent call multiple tools repeatedly after a single question?

Right now it seems each question uses only one tool. Is there a way, like AutoGPT, to call tools recursively multiple times to work out the best answer?
For example: first query the vector database, then do a Google search, then query the vector database again, then read the MySQL database, then summarize the answer.
Could you share a demo like that? I can't find any example of this anywhere.
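For what it's worth, a ReAct-style agent already loops Thought, Action, Observation and can invoke several tools (including the same tool more than once) before answering; how well it chains them depends on the model and the tool descriptions. A minimal sketch with hypothetical tool functions:

```python
from langchain.agents import initialize_agent, Tool, AgentType
from langchain.llms import OpenAI

# Hypothetical tool functions; replace with real implementations
def search_vector_db(query: str) -> str: ...
def google_search(query: str) -> str: ...
def run_sql(query: str) -> str: ...

tools = [
    Tool(name="VectorDB", func=search_vector_db, description="look up the internal knowledge base"),
    Tool(name="Google", func=google_search, description="search the web for up-to-date information"),
    Tool(name="MySQL", func=run_sql, description="query the business database"),
]

# A ReAct agent loops Thought -> Action -> Observation, so it can call several
# tools, and the same tool repeatedly, before emitting the final answer
agent = initialize_agent(
    tools, OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,
)
```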

openai api rate limits

A question: when I run 构建本地知识库问答机器人, I hit the following:
You exceeded your current quota, please check your plan and billing details.
Does this mean I have to pay to run it? The file I uploaded is a very small txt.

Thanks!

Local QA knowledge base question

I have a million PDFs on my computer. Following the instructions, I loaded a CSV of title + author + publication date and then asked questions such as who wrote a given book, but the answers are still nonsense. Is it because OpenAI's token limit means it can't actually learn the millions of entries I gave it?
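A note on the mental model: the model never learns or memorizes the uploaded data; retrieval copies only the top-k most similar rows into each prompt. So the question is whether the vector search surfaces the right row, which works best when every CSV row is its own small document. A minimal sketch (the file name is a placeholder):

```python
from langchain.document_loaders import CSVLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Each CSV row (title, author, date) becomes one small document, so a lookup
# question only needs the single matching row in the prompt
rows = CSVLoader(file_path="books.csv").load()  # "books.csv" is a placeholder
docsearch = Chroma.from_documents(rows, OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=docsearch.as_retriever(search_kwargs={"k": 3}),
)
```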

tiktoken dependency

The 装包以及初始化 (setup and initialization) section needs to add pip install tiktoken; otherwise the notebook throws the following at the 构建本地知识库问答机器人 step:

No module named 'tiktoken'


Dolly-3B doesn't produce correct replies on my own dataset

I want to use Databricks' open-source Dolly model to build a bot that answers questions about my dataset with domain knowledge.
The dataset mostly consists of step-by-step troubleshooting instructions.
I tried to implement it with LangChain, but the replies I get are not the answers I want.
Here is my code:

```python
import torch
from PyPDF2 import PdfReader
from transformers import pipeline
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate

hf_embed = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

# Read the manual from Google Drive (Colab)
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
reader = PdfReader('/content/gdrive/My Drive/data/operation Manual.pdf')
raw_text = ''
for i, page in enumerate(reader.pages):
    text = page.extract_text()
    if text:
        raw_text += text

# Split the text and index it with FAISS
text_splitter = CharacterTextSplitter(
    separator="\n",
    chunk_size=1000,
    chunk_overlap=200,
    length_function=len,
)
texts = text_splitter.split_text(raw_text)
docsearch = FAISS.from_texts(texts, hf_embed)

# Wrap Dolly in a LangChain LLM
model_name = "databricks/dolly-v2-3b"
instruct_pipeline = pipeline(model=model_name, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto",
                             return_full_text=True, max_new_tokens=256, top_p=0.95, top_k=50)
hf_pipe = HuggingFacePipeline(pipeline=instruct_pipeline)

prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

query = "I forgot my login password."
docs = docsearch.similarity_search(query)
chain = load_qa_chain(llm=hf_pipe, chain_type="stuff", prompt=PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```

Besides the answers not matching the solutions in my dataset, generating a reply also takes about 2.5 hours.
I don't know which step I got wrong.
Could someone help me out? Thanks!

Why does building a personal knowledge base with Pinecone exceed the token limit?

Basic situation:

My code is basically identical to the author's. The Pinecone account is newly registered and only the starter plan could be chosen; not sure whether that matters.

```python
embeddings = OpenAIEmbeddings()
# Persist the data
# docsearch = Pinecone.from_texts([t.page_content for t in split_docs], embeddings, index_name=index_name)
# Load the data
docsearch = Pinecone.from_existing_index(index_name, embeddings)
```

I first enabled the persistence line; the vector count in Pinecone grew normally, by about 40 per run. Afterwards I switched to only loading the data. The error is essentially the same every time, except the token count changes on each run:

Error message:

```
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 7066 tokens (6810 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
```

In the 使用GPT3.5模型构建油管频道问答机器人 demo, is ${content} in system_template a typo for ${question}? Asking a question raises ValueError: Missing some input keys: {'context'}

In the 使用GPT3.5模型构建油管频道问答机器人 (YouTube channel QA bot with GPT-3.5) demo, is the ${content} inside system_template wrong?

Question: who is 水蓝心's mother?

```
Traceback (most recent call last):
  File "/Users/knight/workspace/sourceTree/mobvoi/aitoc_competent/api_lc/app/example/7_lc.py", line 72, in <module>
    result = qa({'question': question, 'chat_history': chat_history, 'context': context})
  File "/Users/knight/miniconda3/envs/py310/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
    raise e
  File "/Users/knight/miniconda3/envs/py310/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/Users/knight/miniconda3/envs/py310/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 101, in _call
    new_question = self.question_generator.run(
  File "/Users/knight/miniconda3/envs/py310/lib/python3.10/site-packages/langchain/chains/base.py", line 239, in run
    return self(kwargs, callbacks=callbacks)[self.output_keys[0]]
  File "/Users/knight/miniconda3/envs/py310/lib/python3.10/site-packages/langchain/chains/base.py", line 123, in __call__
    inputs = self.prep_inputs(inputs)
  File "/Users/knight/miniconda3/envs/py310/lib/python3.10/site-packages/langchain/chains/base.py", line 216, in prep_inputs
    self._validate_inputs(inputs)
  File "/Users/knight/miniconda3/envs/py310/lib/python3.10/site-packages/langchain/chains/base.py", line 83, in _validate_inputs
    raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'context'}
```

Process finished with exit code 1
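Reading the traceback, the prompt containing {context} has ended up on the question-condensing step, which only receives question and chat_history. The {context} placeholder belongs to the document-combining step, and the chain fills it with the retrieved documents itself, so passing 'context' into qa(...) by hand does not satisfy the check. A minimal sketch, assuming a langchain version whose from_llm accepts combine_docs_chain_kwargs and an existing docsearch:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

# The {context} placeholder belongs to the document-combining step; the chain
# fills it with the retrieved documents, so don't pass 'context' in manually.
system_template = """Use the following context to answer the user's question.
{context}"""
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{question}"),
])

qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0),
    retriever=docsearch.as_retriever(),  # assumes an existing vector store
    combine_docs_chain_kwargs={"prompt": prompt},
)
result = qa({"question": question, "chat_history": chat_history})
```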
