LangChain for LLM Application Development: Chains (Part 4)
This is Part 4 of Andrew Ng's course series on building LLM applications with LangChain. It introduces chains in LangChain, covering the concepts, use cases, and hands-on examples of several chain types.
1. The concept of chains: a chain typically combines a large language model (LLM) with a prompt, performs a sequence of operations on text or other data, and can handle multiple inputs at once. In LangChain, a chain is a series of components wired together to accomplish a specific goal. A chain for a chatbot application might use an LLM to understand user input, a memory component to store past interactions, and a decision component to produce a relevant response.
The chains module is one of the core building blocks of the LangChain framework for conversational and task-oriented applications; it is mainly responsible for flow control and data passing. Its key traits:
Flow control: chains are LangChain's core flow-control unit; they wire components and steps together and define the application's execution logic.
Data passing: chains carry context and data, so different modules can share information.
Composition and nesting: chains can be nested and combined to build complex flows such as sequential execution, conditional branching, and loops.
Reusability: a chain can be defined as a reusable module and reused across application scenarios.
Flexibility: LangChain supports many chain types (simple chains, index chains, conversation chains, and more) to meet different needs.
Creating and composing chains:
Single chain: a developer can create one chain with a specific function, such as text preprocessing or model inference.
Custom chain: by building on the base chain classes, a developer can define a chain's inputs, outputs, and processing logic.
Sequential composition: multiple chains run in order, with each chain's output feeding the next chain's input.
Parallel composition: several chains run at the same time, and their outputs are merged or used selectively.
Nested chains: one chain calls another chain internally, enabling more complex flow control.
Core chain types:
LLMChain: a chain that talks directly to a large language model (LLM), used to generate and understand natural language.
SimpleSequentialChain: a simple sequential chain that runs a series of steps in order.
SequentialChain: a sequential chain that can hold multiple steps, each of which may itself be a chain.
RouterChain: makes routing decisions, choosing which chain to run based on the input.
TransformChain: for data processing; transforms or preprocesses the input data.
By combining and nesting these chains, LangChain can implement complex natural language applications while staying extensible and maintainable.
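The composition idea above can be sketched in plain Python without LangChain. The classes below are illustrative stand-ins (not LangChain APIs): each "chain" maps an input string to an output string, and a sequential chain pipes outputs into inputs.

```python
# Minimal sketch of the chain idea: each chain maps an input string to an
# output string, and a sequential chain pipes each output into the next input.
# These classes are illustrative stand-ins, not LangChain APIs.

class FakeLLMChain:
    """Stands in for LLMChain: fills a prompt template, then 'calls' a model."""
    def __init__(self, template, fake_llm):
        self.template = template
        self.fake_llm = fake_llm  # any callable str -> str

    def run(self, text):
        prompt = self.template.format(input=text)
        return self.fake_llm(prompt)

class FakeSequentialChain:
    """Stands in for SimpleSequentialChain: runs chains in order."""
    def __init__(self, chains):
        self.chains = chains

    def run(self, text):
        for chain in self.chains:
            text = chain.run(text)  # previous output becomes next input
        return text

# Toy 'models' that transform the prompt deterministically.
name_chain = FakeLLMChain("Name a company that makes {input}",
                          lambda p: "Royal Linens")
desc_chain = FakeLLMChain("Describe {input} in one line",
                          lambda p: "Desc: " + p)

overall = FakeSequentialChain([name_chain, desc_chain])
print(overall.run("sheets"))  # -> Desc: Describe Royal Linens in one line
```

The real chains work the same way at this level: fill a prompt, call the model, hand the result to the next step.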
2. LLMChain: the most basic and most widely used chain type. Import the chat model, the chat prompt template, and LLMChain; initialize the language model and the prompt, then combine the two into a chain. In the example below, the input is a product description and the chain generates a company name for it.
3. Sequential Chains
- SimpleSequentialChain: runs a series of chains in order, where each sub-chain takes exactly one input and returns exactly one output. For example, first generate a company name from a product, then generate a description from that company name; each chain's output becomes the next chain's input.
import warnings
warnings.filterwarnings('ignore')
import os
from langchain.chat_models import ChatOpenAI  # model
from langchain.prompts import ChatPromptTemplate  # prompt
from langchain.chains import LLMChain  # chain
llm = ChatOpenAI(
    openai_api_base="https://dashscope.aliyuncs.com/compatible-mode/v1",
    openai_api_key=os.getenv("DASHSCOPE_API_KEY"),
    model_name="qwen-plus",  # model name
    temperature=0.9
)
import pandas as pd
df = pd.read_csv('Data.csv')
df.head()
prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe "
    "a company that makes {product}?"
)
chain = LLMChain(llm=llm, prompt=prompt)
product = "Queen Size Sheet Set"
chain.run(product)     # legacy API
chain.invoke(product)  # preferred: invoke replaces run in newer LangChain versions
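Before the model is ever called, the chain's only job on the input side is template filling: `from_template` discovers the `{product}` placeholder and fills it on each call. A rough stdlib sketch of that step (LangChain's internals differ; this is only the mechanism):

```python
# Sketch of what ChatPromptTemplate.from_template does with "{product}":
# parse the template for named placeholders, then fill them on each call.
# string.Formatter is the stdlib mechanism; LangChain's internals differ.
from string import Formatter

template = "What is the best name to describe a company that makes {product}?"

# Discover the input variables, as from_template does.
input_variables = [field for _, field, _, _ in Formatter().parse(template) if field]
print(input_variables)  # -> ['product']

# Fill the template, as the chain does before calling the model.
prompt_text = template.format(product="Queen Size Sheet Set")
print(prompt_text)
```

This is also why `chain.run("Queen Size Sheet Set")` works with a bare string here: the chain has exactly one input variable, so the value is unambiguous.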
from langchain.chains import SimpleSequentialChain
llm = ChatOpenAI(
    openai_api_base="https://dashscope.aliyuncs.com/compatible-mode/v1",
    openai_api_key=os.getenv("DASHSCOPE_API_KEY"),
    model_name="qwen-plus",  # model name
    temperature=0.9
)
# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe "
    "a company that makes {product}?"
)
# chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)
# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20-word description for the following "
    "company: {company_name}"
)
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)
overall_simple_chain = SimpleSequentialChain(
    chains=[chain_one, chain_two],
    verbose=True
)
overall_simple_chain.invoke(product)
SequentialChain: handles multiple inputs and outputs. Here we create several chains: translate a review into English, summarize it, detect the review's original language, and write a follow-up response to the summary in that language. Each chain's input keys and output keys must match exactly, or the overall chain cannot wire them together. Given a review as input, the overall chain returns the translated review, a summary, and a follow-up response written in the original language.
from langchain.chains import SequentialChain
llm = ChatOpenAI(
    openai_api_base="https://dashscope.aliyuncs.com/compatible-mode/v1",
    openai_api_key=os.getenv("DASHSCOPE_API_KEY"),
    model_name="qwen-plus",  # model name
    temperature=0.9
)
# prompt template 1: translate to English
first_prompt = ChatPromptTemplate.from_template(
    "Translate the following review to english:"
    "\n\n{Review}"
)
# chain 1: input = Review, output = English_Review
chain_one = LLMChain(llm=llm, prompt=first_prompt,
                     output_key="English_Review")
# prompt template 2: summarize
second_prompt = ChatPromptTemplate.from_template(
    "Can you summarize the following review in 1 sentence:"
    "\n\n{English_Review}"
)
# chain 2: input = English_Review, output = summary
chain_two = LLMChain(llm=llm, prompt=second_prompt,
                     output_key="summary")
# prompt template 3: detect the review's language
third_prompt = ChatPromptTemplate.from_template(
    "What language is the following review:\n\n{Review}"
)
# chain 3: input = Review, output = language
chain_three = LLMChain(llm=llm, prompt=third_prompt,
                       output_key="language")
# prompt template 4: follow-up message
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow up response to the following "
    "summary in the specified language:"
    "\n\nSummary: {summary}\n\nLanguage: {language}"
)
# chain 4: input = summary, language; output = followup_message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt,
                      output_key="followup_message")
# overall chain: input = Review;
# outputs = English_Review, summary, language, followup_message
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary", "language", "followup_message"],
    verbose=True
)
review = df.Review[5]
overall_chain.invoke(review)  # overall_chain(review) also works but is the legacy call style
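The key-matching requirement becomes obvious once you see SequentialChain's data flow as a shared dict of variables: each step reads the keys its prompt needs and writes its output_key back. A runnable sketch with deterministic stand-ins for the LLM calls (illustrative only, not the LangChain implementation):

```python
# Sketch of SequentialChain's keyed data flow: a shared dict of variables,
# where each step reads the keys its prompt needs and writes its output_key.
# The step functions are deterministic stand-ins for the LLM calls.

def run_keyed_chain(steps, inputs):
    """steps: list of (needed_keys, output_key, fn); fn takes a dict slice."""
    data = dict(inputs)
    for needed, output_key, fn in steps:
        missing = [k for k in needed if k not in data]
        if missing:  # this is what an input/output key mismatch looks like
            raise KeyError(f"step '{output_key}' missing inputs: {missing}")
        data[output_key] = fn({k: data[k] for k in needed})
    return data

steps = [
    (["Review"], "English_Review", lambda d: "EN: " + d["Review"]),
    (["English_Review"], "summary", lambda d: d["English_Review"][:10]),
    (["Review"], "language", lambda d: "French"),
    (["summary", "language"], "followup_message",
     lambda d: f"({d['language']}) reply to: {d['summary']}"),
]

result = run_keyed_chain(steps, {"Review": "Je trouve le gout mediocre."})
print(result["followup_message"])
```

If `output_key="English_Review"` in chain 1 did not match the `{English_Review}` placeholder in chain 2's prompt, the run would fail with exactly this kind of missing-key error.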
4. Router chain (multi-prompt chain): used when the input should be routed to a specific sub-chain based on its content. Define prompt templates for different subjects; import MultiPromptChain, LLMRouterChain, and RouterOutputParser; create the destination chains and a default chain; define the routing template; build the router chain and combine everything into an overall chain. When a question belongs to one of the subjects (physics, math, and so on), it is routed to the matching sub-chain; otherwise it falls through to the default chain.
physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise\
and easy to understand manner. \
When you don't know the answer to a question you admit\
that you don't know.
Here is a question:
{input}"""
math_template = """You are a very good mathematician. \
You are great at answering math questions. \
You are so good because you are able to break down \
hard problems into their component parts,
answer the component parts, and then put them together\
to answer the broader question.
Here is a question:
{input}"""
history_template = """You are a very good historian. \
You have an excellent knowledge of and understanding of people,\
events and contexts from a range of historical periods. \
You have the ability to think, reflect, debate, discuss and \
evaluate the past. You have a respect for historical evidence\
and the ability to make use of it to support your explanations \
and judgements.
Here is a question:
{input}"""
computerscience_template = """ You are a successful computer scientist.\
You have a passion for creativity, collaboration,\
forward-thinking, confidence, strong problem-solving capabilities,\
understanding of theories and algorithms, and excellent communication \
skills. You are great at answering coding questions. \
You are so good because you know how to solve a problem by \
describing the solution in imperative steps \
that a machine can easily interpret and you know how to \
choose a solution that has a good balance between \
time complexity and space complexity.
Here is a question:
{input}"""
prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": math_template
    },
    {
        "name": "history",
        "description": "Good for answering history questions",
        "prompt_template": history_template
    },
    {
        "name": "computer science",
        "description": "Good for answering computer science questions",
        "prompt_template": computerscience_template
    }
]
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.prompts import PromptTemplate
llm = ChatOpenAI(
    openai_api_base="https://dashscope.aliyuncs.com/compatible-mode/v1",
    openai_api_key=os.getenv("DASHSCOPE_API_KEY"),
    model_name="qwen-plus",  # model name
    temperature=0.9
)
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = ChatPromptTemplate.from_template(template=prompt_template)
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
default_prompt = ChatPromptTemplate.from_template("{input}")
default_chain = LLMChain(llm=llm, prompt=default_prompt)
MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
language model select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising\
it will ultimately lead to a better response from the language model.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}}}}
```
REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below OR it can be "DEFAULT" if the input is not\
well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.
<< CANDIDATE PROMPTS >>
{destinations}
<< INPUT >>
{{input}}
<< OUTPUT (remember to wrap the output with ```json (output)```)>>"""
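The quadruple braces in the template survive two rounds of `.format` (once for {destinations}, once for {input}) and reach the model as literal single braces, so the model can answer with a JSON object inside a ```json fence. A rough sketch of what a parser then has to do with that output (an illustrative parser, not LangChain's actual RouterOutputParser):

```python
# Sketch of parsing the router's output: the template asks the model to wrap
# a JSON object in a ```json fence; the parser extracts and decodes it.
# This is an illustrative parser, not LangChain's RouterOutputParser.
import json
import re

FENCE = "`" * 3  # built programmatically to keep this example self-contained

def parse_router_output(text):
    pattern = FENCE + r"json\s*(\{.*?\})\s*" + FENCE
    match = re.search(pattern, text, re.DOTALL)
    if match is None:
        raise ValueError("no json code block found in router output")
    return json.loads(match.group(1))

# A plausible model reply following the template's FORMATTING instructions.
raw = (FENCE + 'json\n'
       '{\n'
       '    "destination": "physics",\n'
       '    "next_inputs": "What is black body radiation?"\n'
       '}\n' + FENCE)

decision = parse_router_output(raw)
print(decision["destination"])  # -> physics
```

If the model fails to produce a well-formed fenced JSON object, parsing fails, which is why the template repeats the formatting rules so insistently.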
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
chain = MultiPromptChain(router_chain=router_chain,
                         destination_chains=destination_chains,
                         default_chain=default_chain,
                         verbose=True)
chain.invoke("What is black body radiation?")
chain.invoke("what is 2 + 2")
chain.invoke("Why does every cell in our body contain DNA?")
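Once the router has produced a decision, the dispatch step is simple: look the destination name up among the candidate chains, and fall back to the default chain when it is "DEFAULT" or unknown. The third question above (about DNA) matches no candidate subject, so it takes that fallback path. A sketch of the dispatch logic, with plain callables standing in for the chains (illustrative, not LangChain's MultiPromptChain code):

```python
# Sketch of MultiPromptChain's dispatch step: given the router's decision,
# pick the matching destination chain or fall back to the default chain.
# Chains are stand-in callables here, not LangChain objects.

def dispatch(decision, destination_chains, default_chain):
    name = decision["destination"]
    chain = destination_chains.get(name, default_chain)
    return chain(decision["next_inputs"])

destination_chains = {
    "physics": lambda q: "physics: " + q,
    "math": lambda q: "math: " + q,
}
default_chain = lambda q: "default: " + q

# A physics question is routed to the physics chain...
print(dispatch({"destination": "physics", "next_inputs": "black body?"},
               destination_chains, default_chain))
# ...while an unmatched topic (like biology) falls back to DEFAULT.
print(dispatch({"destination": "DEFAULT", "next_inputs": "Why DNA?"},
               destination_chains, default_chain))
```

This is also why the routing template insists that "destination" MUST be a candidate name or "DEFAULT": any other string would silently route to the fallback.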