Now let's look at a slightly more complex memory type: ConversationSummaryMemory. This kind of memory creates a summary of the conversation as it progresses, which is useful for condensing information from the conversation over time. It summarizes the dialogue as it happens and stores the current summary in memory, and that summary can then be injected into a prompt/chain to represent the conversation so far. This memory is most useful for longer conversations, where keeping the full message history in the prompt verbatim would use too many tokens.
Let's first walk through the basic functionality of this memory type.
Example code:
from langchain.memory import ConversationSummaryMemory, ChatMessageHistory
from langchain.llms import OpenAI
memory = ConversationSummaryMemory(llm=OpenAI(temperature=0))
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})
Output:
{'history': '\nThe human greets the AI, to which the AI responds.'}
We can also get the history as a list of messages (this is useful if you are using the memory with a chat model).
Example code:
memory = ConversationSummaryMemory(llm=OpenAI(temperature=0), return_messages=True)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})
Output:
{'history': [SystemMessage(content='\nThe human greets the AI, to which the AI responds.', additional_kwargs={})]}
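Because the history comes back as message objects, it can be passed straight to a chat model. Below is a minimal sketch of that pattern using ChatOpenAI; the follow-up question is hypothetical and only illustrates prepending the summary message to the new user turn.
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(temperature=0)
# The summary comes back as a list containing a SystemMessage.
summary_messages = memory.load_memory_variables({})["history"]
# Prepend it to the new user turn before calling the chat model.
response = chat(summary_messages + [HumanMessage(content="What have we talked about so far?")])
print(response.content)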
We can also use the predict_new_summary method directly.
Example code:
messages = memory.chat_memory.messages
previous_summary = ""
memory.predict_new_summary(messages, previous_summary)
Output:
'\nThe human greets the AI, to which the AI responds.'
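To make the incremental nature clearer, here is a small sketch (with hypothetical follow-up messages) of folding new turns into the existing summary by passing the current summary as the previous summary:
from langchain.schema import HumanMessage, AIMessage

# Hypothetical new turns to fold into the running summary.
new_messages = [
    HumanMessage(content="Can you recommend a good book?"),
    AIMessage(content="Sure, do you have a genre in mind?"),
]
previous_summary = memory.buffer  # the summary generated so far
memory.predict_new_summary(new_messages, previous_summary)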
Initializing with messages
If you have messages outside this class, you can easily initialize the class with ChatMessageHistory. The summary will be computed during loading.
Example code:
history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("hi there!")
memory = ConversationSummaryMemory.from_messages(llm=OpenAI(temperature=0), chat_memory=history, return_messages=True)
memory.buffer
Output:
'\nThe human greets the AI, to which the AI responds with a friendly greeting.'
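If you already have a summary on hand (for example, one saved from a previous session), you can skip the extra LLM call and initialize the memory directly by passing the summary as buffer:
memory = ConversationSummaryMemory(
    llm=OpenAI(temperature=0),
    buffer="The human greets the AI, to which the AI responds with a friendly greeting.",
    chat_memory=history,
    return_messages=True
)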
Using in a chain
Let's walk through an example of using this in a chain, again setting verbose=True so we can see the prompt.
Example code:
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
llm = OpenAI(temperature=0)
conversation_with_summary = ConversationChain(
llm=llm,
memory=ConversationSummaryMemory(llm=OpenAI()),
verbose=True
)
conversation_with_summary.predict(input="Hi, what's up?")
Output:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi, what's up?
AI:
> Finished chain.
" Hi there! I'm doing great. I'm currently helping a customer with a technical issue. How about you?"
Example code:
conversation_with_summary.predict(input="Tell me more about it!")
Output:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
The human greeted the AI and asked how it was doing. The AI replied that it was doing great and was currently helping a customer with a technical issue.
Human: Tell me more about it!
AI:
> Finished chain.
" Sure! The customer is having trouble with their computer not connecting to the internet. I'm helping them troubleshoot the issue and figure out what the problem is. So far, we've tried resetting the router and checking the network settings, but the issue still persists. We're currently looking into other possible solutions."
Example code:
conversation_with_summary.predict(input="Very cool -- what is the scope of the project?")
Output:
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
The human greeted the AI and asked how it was doing. The AI replied that it was doing great and was currently helping a customer with a technical issue where their computer was not connecting to the internet. The AI was troubleshooting the issue and had already tried resetting the router and checking the network settings, but the issue still persisted and they were looking into other possible solutions.
Human: Very cool -- what is the scope of the project?
AI:
> Finished chain.
" The scope of the project is to troubleshoot the customer's computer issue and find a solution that will allow them to connect to the internet. We are currently exploring different possibilities and have already tried resetting the router and checking the network settings, but the issue still persists."
That concludes this introduction to LangChain's ConversationSummaryMemory.