Background
I decided to improve the quality of my interactions with the LLM. Previously I used a plain prompt -> answer paradigm; now I want to apply the ReAct strategy and give the model the ability to search StackOverflow, so the same LLM can be put to better use.
Challenges
1. How to call StackOverflow
Step 1: pip install stackapi
Step 2: load the tool
from langchain.agents import load_tools

tools = load_tools(
    ["stackexchange"],
    llm=llm
)
Note: StackOverflow is one of the sites in the StackExchange network.
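A quick way to sanity-check the tool before handing it to an agent is to call it directly. A minimal sketch, assuming the tools list created in step 2 and a reachable network; the query string is only an example:

# tools[0] is the StackExchange search tool loaded above; .run() performs one search
# and returns text excerpts from matching questions and answers
result = tools[0].run("JUnit4 @Before vs @BeforeClass")
print(result)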
2. Too many interaction rounds push the token input past the LLM's limit
Approach 1: use ConversationSummaryBufferMemory
This type of memory summarizes earlier conversation turns and keeps the history within the configured token budget.
from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(
    llm=llm,  # this LLM is used to write the summaries
    max_token_limit=4097,
    memory_key="chat_history"
)
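To see what this memory actually does, you can save a couple of exchanges by hand and inspect what it would hand back to the agent. A minimal sketch, assuming the llm and memory objects defined above; the example inputs are made up:

# store two question/answer rounds, then look at the history the agent would receive
memory.save_context(
    {"input": "How do I mark a method as a test in JUnit4?"},
    {"output": "Annotate it with @Test."}
)
memory.save_context(
    {"input": "How do I run setup code before each test?"},
    {"output": "Put it in a method annotated with @Before."}
)
# once the buffered history exceeds max_token_limit, older turns are replaced by an LLM-written summary
print(memory.load_memory_variables({})["chat_history"])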
Approach 2: set the max_iterations parameter
max_iterations is a parameter of the AgentExecutor rather than of ZeroShotAgent itself, so it is passed when the executor is built:
agent = ZeroShotAgent(
    llm_chain=llm_chain,
    tools=tools,
    verbose=True
)
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    max_iterations=4,  # cap the number of reasoning/tool-call rounds so the prompt stays within the token limit
    verbose=True
)
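When the cap is reached, the executor stops the loop and returns a fallback answer instead of running on. A minimal sketch of the related option, assuming the same agent and tools as above; early_stopping_method is an AgentExecutor parameter, and "generate" asks the LLM to compose one final answer from whatever it has gathered so far:

agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    max_iterations=4,
    early_stopping_method="generate",  # default is "force", which returns a fixed "stopped" message
    verbose=True
)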
3. The LLM keeps replying that it cannot answer
Many tutorials set the temperature to 0, claiming this gives the most accurate answers, but I found that with this setting the agent becomes extremely cautious and simply says it doesn't know. Raising the temperature solved the problem.
Test question
What parts does a JUnit4 unit test case consist of?
Code
from constants import PROXY_URL, KEY

import warnings
warnings.filterwarnings("ignore")

import langchain
langchain.debug = True

from langchain.agents import load_tools
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor, ZeroShotAgent
from langchain.chains import LLMChain
from langchain.memory import ConversationSummaryBufferMemory

llm = ChatOpenAI(
    temperature=0.7,  # a very low temperature makes the agent overly cautious and it ends up giving no answer
    model_name="gpt-3.5-turbo-0613",
    openai_api_key=KEY,
    openai_api_base=PROXY_URL
)

memory = ConversationSummaryBufferMemory(
    llm=llm,  # this LLM is used to write the summaries
    max_token_limit=4097,
    memory_key="chat_history"
)

prefix = """You should be a proficient and helpful assistant in java unit testing with JUnit4 framework. You have access to the following tools:"""
suffix = """Begin!

{chat_history}
Question: {input}
{agent_scratchpad}"""

tools = load_tools(
    ["stackexchange"],
    llm=llm
)

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)  # this is where the ReAct prompt format is assembled

llm_chain = LLMChain(llm=llm, prompt=prompt)

agent = ZeroShotAgent(
    llm_chain=llm_chain,
    tools=tools,
    verbose=True
)

agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    max_iterations=4,  # max_iterations belongs to AgentExecutor; cap the rounds so the tokens stay under the limit
    verbose=True,
    memory=memory
)

def ask_agent(question):
    answer = agent_chain.run(input=question)
    return answer

def main():
    test_question = "What parts does a JUnit4 unit test case consist of?"
    test_answer = ask_agent(test_question)
    return test_answer

if __name__ == "__main__":
    main()
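To inspect the ReAct scaffolding that ZeroShotAgent.create_prompt wires in (the Thought / Action / Action Input / Observation loop over the listed tools), you can print the assembled template. A minimal sketch, assuming the prompt object built in the code above:

# shows the prefix, the tool descriptions, the ReAct format instructions
# ("Thought:", "Action:", "Action Input:", "Observation:") and the suffix
print(prompt.template)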
Final output
[chain/end] [1:chain:AgentExecutor] [75.12s] Exiting Chain run with output:
{
  "output": "A JUnit4 unit test case consists of the following parts:\n1. Test class: This is a class that contains the test methods.\n2. Test methods: These are the methods that contain the actual test code. They are annotated with the @Test annotation.\n3. Assertions: These are used to verify the expected behavior of the code being tested. JUnit provides various assertion methods for this purpose.\n4. Annotations: JUnit provides several annotations that can be used to configure the test case, such as @Before, @After, @BeforeClass, and @AfterClass.\n\nOverall, a JUnit4 unit test case is a class that contains test methods with assertions, and can be configured using annotations."
}