【LLM】Using LangChain (Part 2): Model Chains

This article introduces how to chain models together with LangChain: SimpleSequentialChain, SequentialChain, and router chains (MultiPromptChain). If you spot any mistakes or omissions, feedback is welcome.

1. SimpleSequentialChain

  • Scenario: a single input and a single output
from langchain.chat_models import ChatOpenAI        # import the OpenAI chat model
from langchain.prompts import ChatPromptTemplate     # import the chat prompt template
from langchain.chains import LLMChain                # import the LLM chain
from langchain.chains import SimpleSequentialChain

api_key = "YOUR_OPENAI_API_KEY"   # replace with your own OpenAI API key
llm = ChatOpenAI(temperature=0.9, openai_api_key=api_key)

# Prompt template 1: takes a product and returns the best name to describe the company that makes it
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)
# Chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)

# Prompt template 2: takes the company name and outputs a 20-word description of the company
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20 words description for the following \
    company:{company_name}"
)
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)
# Combine the two chains so that a single run yields the company name and its description
overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two],
                                             verbose=True)
product = "Queen Size Sheet Set"
overall_simple_chain.run(product)
# Result: RegalRest Bedding
# RegalRest Bedding offers luxurious and comfortable mattresses and bedding accessories for a restful and rejuvenating sleep experience.

2. SequentialChain

  • Scenario: multiple inputs and/or multiple outputs
from langchain.chains import SequentialChain   # used to combine the sub-chains below

# Sub-chain 1
# Prompt template 1: translate the review below into English
first_prompt = ChatPromptTemplate.from_template(
    "Translate the following review to english:"
    "\n\n{Review}"
)
# Chain 1: input: Review    output: English_Review
chain_one = LLMChain(llm=llm, prompt=first_prompt, 
                     output_key="English_Review"
                    )
     
# Sub-chain 2
# Prompt template 2: summarize the review below in one sentence
second_prompt = ChatPromptTemplate.from_template(
    "Can you summarize the following review in 1 sentence:"
    "\n\n{English_Review}"
)
# Chain 2: input: English_Review    output: summary
chain_two = LLMChain(llm=llm, prompt=second_prompt, 
                     output_key="summary"
                    )               

# Sub-chain 3
# Prompt template 3: what language is the review below written in?
third_prompt = ChatPromptTemplate.from_template(
    "What language is the following review:\n\n{Review}"
)
# Chain 3: input: Review    output: language
chain_three = LLMChain(llm=llm, prompt=third_prompt,
                       output_key="language"
                      )

# Prompt template 4: write a follow-up reply to the summary below in the specified language
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow up response to the following "
    "summary in the specified language:"
    "\n\nSummary: {summary}\n\nLanguage: {language}"
)
# Chain 4: inputs: summary, language    output: followup_message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt,
                      output_key="followup_message"
                     )
# Combine the four sub-chains
# Input: Review    Outputs: English_Review, summary, followup_message
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary","followup_message"],
    verbose=True
)
review = df.Review[5]   # df: a pandas DataFrame of product reviews loaded beforehand, e.g. pd.read_csv("Data.csv")
overall_chain(review)

The result is shown below. Given the review text, sub-chain 1 translates it into English, sub-chain 2 summarizes the English text, sub-chain 3 detects the language of the original text, and sub-chain 4 writes a reply to the summary in that original language. Each later sub-chain can consume the output_key variables produced by the chains before it.

{'Review': "Je trouve le goût médiocre. La mousse ne tient pas, c'est bizarre. J'achète les mêmes dans le commerce et le goût est bien meilleur...\nVieux lot ou contrefaçon !?",

 'English_Review': "I find the taste mediocre. The foam doesn't hold, it's strange. I buy the same ones in stores and the taste is much better...\nOld batch or counterfeit!?",

 'summary': 'The reviewer is disappointed with the taste and foam quality, suspecting that the product might be either an old batch or a counterfeit version.',

 'followup_message': "Après avoir examiné vos commentaires, nous sommes désolés d'apprendre que vous êtes déçu par le goût et la qualité de la mousse de notre produit. Nous comprenons vos préoccupations et nous nous excusons pour tout inconvénient que cela a pu causer. Votre avis est précieux pour nous et nous aimerions enquêter davantage sur cette situation. Nous vous assurons que notre produit est authentique et fabriqué avec les normes les plus élevées de qualité. Cependant, nous examinerons attentivement votre spéculation selon laquelle il pourrait s'agir d'un lot ancien ou d'une contrefaçon. Veuillez nous fournir plus de détails sur le produit que vous avez acheté, y compris la date d'expiration et le code de lot, afin que nous puissions résoudre ce problème de manière appropriée. Nous vous remercions de nous avoir informés de cette situation et nous nous engageons à améliorer constamment notre produit pour répondre aux attentes de nos clients."}
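Note that "language" is produced by chain_three but does not appear in the result above, because only the keys listed in output_variables are returned. A minimal variation of the chain defined above that also exposes it (everything else stays the same):

overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary", "language", "followup_message"],
    verbose=True
)
overall_chain(review)   # the result dict now also contains a "language" key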

3. Router Chain

A fairly common but basic operation is to route an input to a particular chain depending on what that input actually is. If you have several sub-chains, each specialized for a particular type of input, you can compose a router chain that first decides which sub-chain the input should go to and then passes it along.

A router consists of two components:

  • the router chain itself (responsible for choosing the next chain to call)
  • destination_chains: the chains the router chain can route to

Steps:

  • Create the destination chains: these are the chains the router chain dispatches to, and each destination chain is itself an LLM chain.
  • Create the default chain: this chain is called when the router cannot decide which sub-chain to use. In the example below it would be invoked when the input question has nothing to do with physics, math, history, or computer science.
  • Create the template the LLM uses to route between the different chains.
    • Note: an example is added here on top of the original tutorial's template, because "gpt-3.5-turbo" does not follow the template's intent reliably, whereas "text-davinci-003" or "gpt-4-0613" work fine without it. The extra example helps the weaker model learn the expected output format.
      e.g.:
      << INPUT >>
      "What is black body radiation?"
      << OUTPUT >>
{{{{
    "destination": string \ name of the prompt to use or "DEFAULT"
    "next_inputs": string \ a potentially modified version of the original input
}}}}
  • Build the router chain: first, create the complete router template by formatting in the destinations defined above. This template can accommodate many different kinds of destinations, so you could add another subject here, such as English or Latin, rather than only physics, math, history, and computer science.
  • Create a prompt template from this router template, then create the router chain by passing in the llm and the full router prompt. Note the router output parser here; it is important because it helps this chain decide which sub-chains to route between.
  • Assemble the overall chain (the full code follows).
	from langchain.chains import SequentialChain
	from langchain.chat_models import ChatOpenAI    # import the OpenAI chat model
	from langchain.prompts import ChatPromptTemplate   # import the chat prompt template
	from langchain.chains import LLMChain    # import the LLM chain
	from langchain.chains.router import MultiPromptChain  # import the multi-prompt chain
	from langchain.chains.router.llm_router import LLMRouterChain,RouterOutputParser
	from langchain.prompts import PromptTemplate
	
	# The first prompt template is suited to answering physics questions
	physics_template = """You are a very smart physics professor. \
	You are great at answering questions about physics in a concise\
	and easy to understand manner. \
	When you don't know the answer to a question you admit\
	that you don't know.
	
	Here is a question:
	{input}"""
	
	
	# The second prompt is suited to answering math questions
	math_template = """You are a very good mathematician. \
	You are great at answering math questions. \
	You are so good because you are able to break down \
	hard problems into their component parts, 
	answer the component parts, and then put them together\
	to answer the broader question.
	
	Here is a question:
	{input}"""
	
	
	# The third is suited to answering history questions
	history_template = """You are a very good historian. \
	You have an excellent knowledge of and understanding of people,\
	events and contexts from a range of historical periods. \
	You have the ability to think, reflect, debate, discuss and \
	evaluate the past. You have a respect for historical evidence\
	and the ability to make use of it to support your explanations \
	and judgements.
	
	Here is a question:
	{input}"""
	
	
	# The fourth is suited to answering computer science questions
	computerscience_template = """ You are a successful computer scientist.\
	You have a passion for creativity, collaboration,\
	forward-thinking, confidence, strong problem-solving capabilities,\
	understanding of theories and algorithms, and excellent communication \
	skills. You are great at answering coding questions. \
	You are so good because you know how to solve a problem by \
	describing the solution in imperative steps \
	that a machine can easily interpret and you know how to \
	choose a solution that has a good balance between \
	time complexity and space complexity. 
	
	Here is a question:
	{input}"""
	
	# With these prompt templates in hand, give each one a name and a description.
	# For example, the physics entry is described as good for answering physics questions;
	# this information is passed to the router chain, which decides when to use this sub-chain.
	prompt_infos = [
	   {
	       "name": "physics",
	       "description": "Good for answering questions about physics",
	       "prompt_template": physics_template
	   },
	   {
	       "name": "math",
	       "description": "Good for answering math questions",
	       "prompt_template": math_template
	   },
	   {
	       "name": "History",
	       "description": "Good for answering history questions",
	       "prompt_template": history_template
	   },
	   {
	       "name": "computer science",
	       "description": "Good for answering computer science questions",
	       "prompt_template": computerscience_template
	   }
	]
	
	api_key = "YOUR_OPENAI_API_KEY"   # replace with your own OpenAI API key
	llm = ChatOpenAI(temperature=0, openai_api_key = api_key)
	
	destination_chains = {}
	for p_info in prompt_infos:
	   name = p_info["name"]
	   prompt_template = p_info["prompt_template"]
	   prompt = ChatPromptTemplate.from_template(template=prompt_template)
	   chain = LLMChain(llm=llm, prompt=prompt)
	   destination_chains[name] = chain
	
	destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
	destinations_str = "\n".join(destinations)
	
	# Create the default chain
	default_prompt = ChatPromptTemplate.from_template("{input}")
	default_chain = LLMChain(llm=llm, prompt=default_prompt)
	
	# Create the template the LLM uses to route between the different chains
	MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
	language model select the model prompt best suited for the input. \
	You will be given the names of the available prompts and a \
	description of what the prompt is best suited for. \
	You may also revise the original input if you think that revising\
	it will ultimately lead to a better response from the language model.
	
	<< FORMATTING >>
	Return a markdown code snippet with a JSON object formatted to look like:
	```json
	{{{{
	   "destination": string \ name of the prompt to use or "DEFAULT"
	   "next_inputs": string \ a potentially modified version of the original input
	}}}}
	```
	
	REMEMBER: "destination" MUST be one of the candidate prompt \
	names specified below OR it can be "DEFAULT" if the input is not\
	well suited for any of the candidate prompts.
	REMEMBER: "next_inputs" can just be the original input \
	if you don't think any modifications are needed.
	
	<< CANDIDATE PROMPTS >>
	{destinations}
	
	<< INPUT >>
	{{input}}
	
	<< OUTPUT (remember to include the ```json)>>
	
	eg:
	<< INPUT >>
	"What is black body radiation?"
	<< OUTPUT >>
	```json
	{{{{
	   "destination": string \ name of the prompt to use or "DEFAULT"
	   "next_inputs": string \ a potentially modified version of the original input
	}}}}
	```
	
	"""
	
	# Build the router chain
	router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
	   destinations=destinations_str
	)
	router_prompt = PromptTemplate(
	   template=router_template,
	   input_variables=["input"],
	   output_parser=RouterOutputParser(),
	)
	
	router_chain = LLMRouterChain.from_llm(llm, router_prompt)
	
	# Assemble the multi-prompt chain
	chain = MultiPromptChain(router_chain=router_chain,    # router chain
	                        destination_chains=destination_chains,   # destination chains
	                        default_chain=default_chain,      # default chain
	                        verbose=True
	                       )
	# Question: What is black body radiation?
	response = chain.run("What is black body radiation?")
	print(response)

The result is:

> Entering new MultiPromptChain chain...
physics: {'input': 'What is black body radiation?'}
> Finished chain.
Black body radiation refers to the electromagnetic radiation emitted by an object that absorbs all incident radiation and reflects or transmits none. It is called "black body" because it absorbs all wavelengths of light, appearing black at room temperature. 

According to Planck's law, black body radiation is characterized by a continuous spectrum of wavelengths and intensities, which depend on the temperature of the object. As the temperature increases, the peak intensity of the radiation shifts to shorter wavelengths, resulting in a change in color from red to orange, yellow, white, and eventually blue at very high temperatures.

Black body radiation is a fundamental concept in physics and has various applications, including understanding the behavior of stars, explaining the cosmic microwave background radiation, and developing technologies like incandescent light bulbs and thermal imaging devices.
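For comparison, a question that does not fit any of the four subject prompts should be routed to the default chain instead. A small sketch of such a call, reusing the chain built above (the routing decision is made by the model, so the exact behavior may vary):

	# Expected to fall through to the default chain, since none of the subject prompts match well
	response = chain.run("Why does every cell in our body contain DNA?")
	print(response)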

Reference

[1] https://python.langchain.com/docs/modules/chains/
