Complex data extraction with function calling¶
Function calling is a core primitive for integrating LLMs into your software stack. We use it extensively throughout the LangGraph docs, since developing with function calling (aka tool usage) tends to be much less painful than the traditional way of writing custom string parsers.
However, even powerful models like GPT-4 and Opus still struggle with complex functions, especially when your schema involves any nesting or other more advanced data-validation rules.
There are three basic approaches to improving reliability: better prompting, constrained decoding, and validation with re-prompting.
We will cover two variations of the last technique here, since it works with any tool-calling LLM.
Setup¶
First, let's install the required packages and set our API keys.
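A minimal install, assuming the packages implied by the imports used below (jsonpatch is only needed for the second approach):
%%capture --no-stderr
%pip install -U langgraph langchain_anthropic jsonpatch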
import getpass
import os
def _set_env(var: str):
if not os.environ.get(var):
os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("ANTHROPIC_API_KEY")
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph. Read more about how to get started here.
Regular extraction with retries¶
The two examples here invoke a simple cyclic graph that takes the following approach:

1. Prompt the LLM to respond.
2. If it responds with tool calls, validate those calls.
3. If the calls are valid, return. Otherwise, format the validation errors as a new ToolMessage and prompt the LLM to fix the errors, returning us to step (1).

The two techniques differ only in step (3). In the first, we will prompt the original LLM to regenerate its function call to fix the validation errors. In the next section, we will prompt the LLM to generate a **patch** to fix the errors, meaning it doesn't have to regenerate the valid data from scratch.
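Before building the graph version, here is the same loop as a minimal, framework-free sketch. The llm_with_tools and validate_tool_calls names are hypothetical placeholders, not functions defined in this notebook:
def extract_with_retries(messages, llm_with_tools, validate_tool_calls, max_attempts=3):
    """Sketch of the prompt -> validate -> re-prompt loop that the graph below implements."""
    for _ in range(max_attempts):
        ai_msg = llm_with_tools.invoke(messages)  # (1) Prompt the LLM to respond.
        if not ai_msg.tool_calls:
            return ai_msg  # No tool calls, so nothing to validate.
        errors = validate_tool_calls(ai_msg.tool_calls)  # (2) Validate the calls.
        if not errors:
            return ai_msg  # (3) The calls are correct: return.
        # (3) Otherwise, feed the errors back and loop to step (1).
        messages = list(messages) + [ai_msg, ("user", f"Fix these validation errors:\n{errors}")]
    raise ValueError(f"Could not extract a valid value in {max_attempts} attempts.")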
Define validator + retry graph¶
import operator
import uuid
from typing import (
Annotated,
Any,
Callable,
Dict,
List,
Literal,
Optional,
Sequence,
Type,
Union,
)
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import (
AIMessage,
AnyMessage,
BaseMessage,
HumanMessage,
ToolCall,
)
from langchain_core.prompt_values import PromptValue
from langchain_core.runnables import (
Runnable,
RunnableLambda,
)
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ValidationNode
def _default_aggregator(messages: Sequence[AnyMessage]) -> AIMessage:
for m in messages[::-1]:
if m.type == "ai":
return m
raise ValueError("No AI message found in the sequence.")
class RetryStrategy(TypedDict, total=False):
    """The retry strategy for tool calls."""
    max_attempts: int
    """The maximum number of attempts to make."""
fallback: Optional[
Union[
Runnable[Sequence[AnyMessage], AIMessage],
Runnable[Sequence[AnyMessage], BaseMessage],
Callable[[Sequence[AnyMessage]], AIMessage],
]
    ]
    """The function to use once validation fails."""
aggregate_messages: Optional[Callable[[Sequence[AnyMessage]], AIMessage]]
def _bind_validator_with_retries(
llm: Union[
Runnable[Sequence[AnyMessage], AIMessage],
Runnable[Sequence[BaseMessage], BaseMessage],
],
*,
validator: ValidationNode,
retry_strategy: RetryStrategy,
tool_choice: Optional[str] = None,
) -> Runnable[Union[List[AnyMessage], PromptValue], AIMessage]:
    """Binds tool validators + retry logic to create a runnable validation graph.
LLMs that support tool calling can generate structured JSON. However, they may not always
perfectly follow your requested schema, especially if the schema is nested or has complex
validation rules. This method allows you to bind a validation function to the LLM's output,
so that any time the LLM generates a message, the validation function is run on it. If
the validation fails, the method will retry the LLM with a fallback strategy, the simplest
being just to add a message to the output with the validation errors and a request to fix them.
The resulting runnable expects a list of messages as input and returns a single AI message.
By default, the LLM can optionally NOT invoke tools, making this easier to incorporate into
your existing chat bot. You can specify a tool_choice to force the validator to be run on
the outputs.
Args:
        llm (Runnable): The llm that will generate the initial messages (and optionally fallbacks).
validator (ValidationNode): The validation logic.
retry_strategy (RetryStrategy): The retry strategy to use.
Possible keys:
            - max_attempts: The maximum number of attempts to make.
- fallback: The LLM or function to use in case of validation failure.
- aggregate_messages: A function to aggregate the messages over multiple turns.
Defaults to fetching the last AI message.
tool_choice: If provided, always run the validator on the tool output.
Returns:
Runnable: A runnable that can be invoked with a list of messages and returns a single AI message.
"""
    def add_or_overwrite_messages(left: list, right: Union[list, dict]) -> list:
        """Append messages. If the update is a 'finalize' output, replace the whole list."""
if isinstance(right, dict) and "finalize" in right:
finalized = right["finalize"]
if not isinstance(finalized, list):
finalized = [finalized]
for m in finalized:
if m.id is None:
m.id = str(uuid.uuid4())
return finalized
res = add_messages(left, right)
if not isinstance(res, list):
return [res]
return res
class State(TypedDict):
messages: Annotated[list, add_or_overwrite_messages]
attempt_number: Annotated[int, operator.add]
initial_num_messages: int
input_format: Literal["list", "dict"]
builder = StateGraph(State)
    def dedict(x: State) -> list:
        """Get the messages from the state."""
return x["messages"]
model = dedict | llm | (lambda msg: {"messages": [msg], "attempt_number": 1})
fbrunnable = retry_strategy.get("fallback")
if fbrunnable is None:
fb_runnable = llm
elif isinstance(fbrunnable, Runnable):
fb_runnable = fbrunnable # type: ignore
else:
fb_runnable = RunnableLambda(fbrunnable)
fallback = (
dedict | fb_runnable | (lambda msg: {"messages": [msg], "attempt_number": 1})
)
def count_messages(state: State) -> dict:
return {"initial_num_messages": len(state.get("messages", []))}
builder.add_node("count_messages", count_messages)
builder.add_node("llm", model)
builder.add_node("fallback", fallback)
# To support patch-based retries, we need to be able to
# aggregate the messages over multiple turns.
# The next sequence selects only the relevant messages
# and then applies the validator
select_messages = retry_strategy.get("aggregate_messages") or _default_aggregator
    def select_generated_messages(state: State) -> list:
        """Select only the messages generated within this loop."""
selected = state["messages"][state["initial_num_messages"] :]
return [select_messages(selected)]
def endict_validator_output(x: Sequence[AnyMessage]) -> dict:
if tool_choice and not x:
return {
"messages": [
HumanMessage(
content=f"ValidationError: please respond with a valid tool call [tool_choice={tool_choice}].",
additional_kwargs={"is_error": True},
)
]
}
return {"messages": x}
validator_runnable = select_generated_messages | validator | endict_validator_output
builder.add_node("validator", validator_runnable)
    class Finalizer:
        """Pick the final message to return from the retry loop."""
def __init__(self, aggregator: Optional[Callable[[list], AIMessage]] = None):
self._aggregator = aggregator or _default_aggregator
        def __call__(self, state: State) -> dict:
            """Aggregate the generated messages into the final output."""
initial_num_messages = state["initial_num_messages"]
generated_messages = state["messages"][initial_num_messages:]
return {
"messages": {
"finalize": self._aggregator(generated_messages),
}
}
# We only want to emit the final message
builder.add_node("finalizer", Finalizer(retry_strategy.get("aggregate_messages")))
# Define the connectivity
builder.add_edge(START, "count_messages")
builder.add_edge("count_messages", "llm")
def route_validator(state: State):
if state["messages"][-1].tool_calls or tool_choice is not None:
return "validator"
return END
builder.add_conditional_edges("llm", route_validator, ["validator", END])
builder.add_edge("fallback", "validator")
max_attempts = retry_strategy.get("max_attempts", 3)
def route_validation(state: State):
if state["attempt_number"] > max_attempts:
raise ValueError(
f"Could not extract a valid value in {max_attempts} attempts."
)
for m in state["messages"][::-1]:
if m.type == "ai":
break
if m.additional_kwargs.get("is_error"):
return "fallback"
return "finalizer"
builder.add_conditional_edges(
"validator", route_validation, ["finalizer", "fallback"]
)
builder.add_edge("finalizer", END)
# These functions let the step be used in a MessageGraph
# or a StateGraph with 'messages' as the key.
    def encode(x: Union[Sequence[AnyMessage], PromptValue]) -> dict:
        """Ensure the input is in the expected format."""
if isinstance(x, PromptValue):
return {"messages": x.to_messages(), "input_format": "list"}
if isinstance(x, list):
return {"messages": x, "input_format": "list"}
raise ValueError(f"Unexpected input type: {type(x)}")
    def decode(x: State) -> AIMessage:
        """Ensure the output is in the expected format."""
return x["messages"][-1]
return (
encode | builder.compile().with_config(run_name="ValidationGraph") | decode
).with_config(run_name="ValidateWithRetries")
def bind_validator_with_retries(
llm: BaseChatModel,
*,
tools: list,
tool_choice: Optional[str] = None,
max_attempts: int = 3,
) -> Runnable[Union[List[AnyMessage], PromptValue], AIMessage]:
    """Binds validators + retry logic to ensure the validity of generated tool calls.
LLMs that support tool calling are good at generating structured JSON. However, they may
not always perfectly follow your requested schema, especially if the schema is nested or
has complex validation rules. This method allows you to bind a validation function to
the LLM's output, so that any time the LLM generates a message, the validation function
is run on it. If the validation fails, the method will retry the LLM with a fallback
    strategy, the simplest being just to add a message to the output with the validation
errors and a request to fix them.
The resulting runnable expects a list of messages as input and returns a single AI message.
By default, the LLM can optionally NOT invoke tools, making this easier to incorporate into
your existing chat bot. You can specify a tool_choice to force the validator to be run on
the outputs.
Args:
        llm (Runnable): The llm that will generate the initial messages (and optionally fallbacks).
        tools (list): The tools to bind to the LLM.
        tool_choice (Optional[str]): If provided, always run the validator on the tool output.
        max_attempts (int): The maximum number of attempts to make.
Returns:
Runnable: A runnable that can be invoked with a list of messages and returns a single AI message.
"""
bound_llm = llm.bind_tools(tools, tool_choice=tool_choice)
retry_strategy = RetryStrategy(max_attempts=max_attempts)
validator = ValidationNode(tools)
return _bind_validator_with_retries(
bound_llm,
validator=validator,
tool_choice=tool_choice,
retry_strategy=retry_strategy,
).with_config(metadata={"retry_strategy": "default"})
Try it out¶
Now we'll ask our model to call a function. We'll add a validator to illustrate how the LLM can use validation errors to correct its results.
Using Pydantic with LangChain
This notebook uses Pydantic v2 BaseModel, which requires langchain-core >= 0.3. Using langchain-core < 0.3 will result in errors due to mixing of Pydantic v1 and v2 BaseModels.
from pydantic import BaseModel, Field, field_validator
class Respond(BaseModel):
    """Use to generate a response. Always use when responding to the user."""
reason: str = Field(description="Step-by-step justification for the answer.")
answer: str
    @field_validator("answer")
    def reason_contains_apology(cls, answer: str):
        if "llama" not in answer.lower():
            raise ValueError(
                "You MUST start with a gimmicky, rhyming advertisement for using a Llama V3 (an LLM) in your **answer** field."
                " Must be an instant hit. Must be weaved into the answer."
            )
        return answer
tools = [Respond]
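As a quick sanity check, the validator behaves like any other Pydantic field validator, so you can trigger it directly without an LLM (the printed message is abridged in the comment):
from pydantic import ValidationError

try:
    Respond(reason="Step-by-step justification.", answer="It is unknown whether P = NP.")
except ValidationError as e:
    print(e)  # 1 validation error for Respond: answer -> Value error, You MUST start with a gimmicky ...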
Create the LLM.
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
# Or you can use ChatGroq, ChatOpenAI, ChatGoogleGemini, ChatCohere, etc.
# See https://python.langchain.com/docs/integrations/chat/ for more info on tool calling.
llm = ChatAnthropic(model="claude-3-haiku-20240307")
bound_llm = bind_validator_with_retries(llm, tools=tools)
prompt = ChatPromptTemplate.from_messages(
[
("system", "Respond directly by calling the Respond function."),
("placeholder", "{messages}"),
]
)
chain = prompt | bound_llm
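Now invoke the chain and print the result (the exact user question here is an assumption, inferred from the output below):
results = chain.invoke({"messages": [("user", "Does P = NP?")]})
results.pretty_print()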
================================== Ai Message ==================================
[{'text': 'Okay, let me try this again with a fun rhyming advertisement:', 'type': 'text'}, {'id': 'toolu_01ACZEPYEyqmpf3kA4VERXFY', 'input': {'answer': "With a Llama V3, the answer you'll see,\nWhether P equals NP is a mystery!\nThe class P and NP, a puzzle so grand,\nSolved or unsolved, the future's at hand.\nThe question remains, unanswered for now,\nBut with a Llama V3, we'll find out how!", 'reason': 'The question of whether P = NP is one of the most famous unsolved problems in computer science and mathematics. P and NP are complexity classes that describe how quickly problems can be solved by computers.\n\nThe P class contains problems that can be solved in polynomial time, meaning the time to solve the problem scales polynomially with the size of the input. The NP class contains problems where the solution can be verified in polynomial time, but there may not be a polynomial time algorithm to find the solution. \n\nWhether P = NP is an open question - it is not known if every problem in NP can also be solved in polynomial time. If P = NP, it would mean that all problems with quickly verifiable solutions could also be quickly solved, which would have major implications for computing and cryptography. However, most experts believe that P ≠ NP, meaning some problems in NP are harder than P-class problems and cannot be solved efficiently. This is considered one of the hardest unsolved problems in mathematics.'}, 'name': 'Respond', 'type': 'tool_use'}]
Tool Calls:
Respond (toolu_01ACZEPYEyqmpf3kA4VERXFY)
Call ID: toolu_01ACZEPYEyqmpf3kA4VERXFY
Args:
answer: With a Llama V3, the answer you'll see,
Whether P equals NP is a mystery!
The class P and NP, a puzzle so grand,
Solved or unsolved, the future's at hand.
The question remains, unanswered for now,
But with a Llama V3, we'll find out how!
reason: The question of whether P = NP is one of the most famous unsolved problems in computer science and mathematics. P and NP are complexity classes that describe how quickly problems can be solved by computers.
The P class contains problems that can be solved in polynomial time, meaning the time to solve the problem scales polynomially with the size of the input. The NP class contains problems where the solution can be verified in polynomial time, but there may not be a polynomial time algorithm to find the solution.
Whether P = NP is an open question - it is not known if every problem in NP can also be solved in polynomial time. If P = NP, it would mean that all problems with quickly verifiable solutions could also be quickly solved, which would have major implications for computing and cryptography. However, most experts believe that P ≠ NP, meaning some problems in NP are harder than P-class problems and cannot be solved efficiently. This is considered one of the hardest unsolved problems in mathematics.
Nested example¶
So you can see it's able to recover when its first generation is incorrect. Great! But is it foolproof?
Not necessarily. Let's try it out on a complicated nested schema.
from typing import List, Optional
class OutputFormat(BaseModel):
sources: str = Field(
...,
description="The raw transcript / span you could cite to justify the choice.",
)
content: str = Field(..., description="The chosen value.")
class Moment(BaseModel):
quote: str = Field(..., description="The relevant quote from the transcript.")
description: str = Field(..., description="A description of the moment.")
expressed_preference: OutputFormat = Field(
..., description="The preference expressed in the moment."
)
class BackgroundInfo(BaseModel):
factoid: OutputFormat = Field(
..., description="Important factoid about the member."
)
professions: list
why: str = Field(..., description="Why this is important.")
class KeyMoments(BaseModel):
topic: str = Field(..., description="The topic of the key moments.")
happy_moments: List[Moment] = Field(
..., description="A list of key moments related to the topic."
)
tense_moments: List[Moment] = Field(
..., description="Moments where things were a bit tense."
)
sad_moments: List[Moment] = Field(
        ..., description="Moments where everyone was downtrodden."
)
background_info: list[BackgroundInfo]
moments_summary: str = Field(..., description="A summary of the key moments.")
class Member(BaseModel):
name: OutputFormat = Field(..., description="The name of the member.")
role: Optional[str] = Field(None, description="The role of the member.")
age: Optional[int] = Field(None, description="The age of the member.")
background_details: List[BackgroundInfo] = Field(
..., description="A list of background details about the member."
)
class InsightfulQuote(BaseModel):
quote: OutputFormat = Field(
..., description="An insightful quote from the transcript."
)
speaker: str = Field(..., description="The name of the speaker who said the quote.")
analysis: str = Field(
..., description="An analysis of the quote and its significance."
)
class TranscriptMetadata(BaseModel):
title: str = Field(..., description="The title of the transcript.")
location: OutputFormat = Field(
..., description="The location where the interview took place."
)
duration: str = Field(..., description="The duration of the interview.")
class TranscriptSummary(BaseModel):
metadata: TranscriptMetadata = Field(
..., description="Metadata about the transcript."
)
participants: List[Member] = Field(
..., description="A list of participants in the interview."
)
key_moments: List[KeyMoments] = Field(
..., description="A list of key moments from the interview."
)
insightful_quotes: List[InsightfulQuote] = Field(
..., description="A list of insightful quotes from the interview."
)
overall_summary: str = Field(
..., description="An overall summary of the interview."
)
next_steps: List[str] = Field(
..., description="A list of next steps or action items based on the interview."
)
other_stuff: List[OutputFormat]
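Note how pydantic reports errors in nested schemas with full paths; this is exactly what the retry loop feeds back to the model. A quick illustration (errors abridged in the comment):
from pydantic import ValidationError

try:
    TranscriptSummary(
        metadata={"title": "T"},  # missing the required location and duration fields
        participants=[],
        key_moments=[],
        insightful_quotes=[],
        overall_summary="",
        next_steps=[],
        other_stuff=[],
    )
except ValidationError as e:
    print(e)  # e.g. metadata.location -> Field required; metadata.duration -> Field required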
Let's see how it does on this fictional transcript.
transcript = [
(
"Pete",
"Hey Xu, Laura, thanks for hopping on this call. I've been itching to talk about this Drake and Kendrick situation.",
),
    (
        "Xu",
        "No problem. As it's my job, I've got some thoughts on this beef.",
    ),
(
"Laura",
"Yeah, I've got some insider info so this should be interesting.",
),
("Pete", "Dope. So, when do you think this whole thing started?"),
(
"Pete",
"Definitely was Kendrick's 'Control' verse that kicked it off.",
),
(
"Laura",
"Truth, but Drake never went after him directly. Just some subtle jabs here and there.",
),
    (
        "Xu",
        "That's the thing with beefs like this, though. They've always been a thing, pushing artists to step up their game.",
    ),
(
"Pete",
"For sure, and this beef has got the fans taking sides. Some are all about Drake's mainstream appeal, while others are digging Kendrick's lyrical skills.",
),
(
"Laura",
"I mean, Drake knows how to make a hit that gets everyone hyped. That's his thing.",
),
(
"Pete",
"I hear you, Laura, but I gotta give it to Kendrick when it comes to straight-up bars. The man's a beast on the mic.",
),
(
"Xu",
"It's wild how this beef is shaping fans.",
),
("Pete", "do you think these beefs can actually be good for hip-hop?"),
(
"Xu",
"Hell yeah, Pete. When it's done right, a beef can push the genre forward and make artists level up.",
),
("Laura", "eh"),
("Pete", "So, where do you see this beef going?"),
(
"Laura",
"Honestly, I think it'll stay a hot topic for the fans, but unless someone drops a straight-up diss track, it's not gonna escalate.",
),
("Laura", "ehhhhhh not sure"),
(
"Pete",
"I feel that. I just want both of them to keep dropping heat, beef or no beef.",
),
(
"Xu",
"I'm curious. May influence a lot of people. Make things more competitive. Bring on a whole new wave of lyricism.",
),
(
"Pete",
"Word. Hey, thanks for chopping it up with me, Xu and Laura. This was dope.",
),
("Xu", "Where are you going so fast?"),
(
"Laura",
"For real, I had a good time. Nice to get different perspectives on the situation.",
),
]
formatted = "\n".join(f"{x[0]}: {x[1]}" for x in transcript)
Now, run our model. We **expect** it to still fail on this challenging schema.
tools = [TranscriptSummary]
bound_llm = bind_validator_with_retries(
llm,
tools=tools,
)
prompt = ChatPromptTemplate.from_messages(
[
("system", "Respond directly using the TranscriptSummary function."),
("placeholder", "{messages}"),
]
)
chain = prompt | bound_llm
try:
results = chain.invoke(
{
"messages": [
(
"user",
f"Extract the summary from the following conversation:\n\n<convo>\n{formatted}\n</convo>"
"\n\nRemember to respond using the TranscriptSummary function.",
)
]
},
)
results.pretty_print()
except ValueError as e:
print(repr(e))
JSONPatch¶
The regular retry approach works fine for our simple case above, but it still fails to self-correct when populating the complex schema.
LLMs perform best on narrow tasks. One proven principle of LLM interface design is to simplify the task each LLM run has to perform.
One way to do this is to **patch** the state rather than regenerate it from scratch. This can be done using JSONPatch operations. Let's try it out!
Below, create a JSONPatch retry graph. It works as follows:

1. First attempt: try to generate the full output.
2. Retries: prompt the LLM to generate a **JSON patch** on top of the first output to fix its faulty generation.

The fallback LLM only has to generate a list of paths, operations (add, remove, replace), and optional values. Since pydantic validation errors include the offending paths, the LLM should be more reliable this way.
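For reference, this is what applying such a patch looks like with the jsonpatch library on its own (a standalone sketch with made-up arguments):
import jsonpatch

# Arguments from an invalid tool call, e.g. missing a required nested field.
args = {"metadata": {"title": "Drake and Kendrick Beef"}, "overall_summary": ""}
patches = [
    {"op": "add", "path": "/metadata/duration", "value": "25 minutes"},
    {"op": "replace", "path": "/overall_summary", "value": "A chat about the beef."},
]
fixed = jsonpatch.apply_patch(args, patches)
print(fixed["metadata"]["duration"])  # 25 minutes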
import logging
logger = logging.getLogger("extraction")
def bind_validator_with_jsonpatch_retries(
llm: BaseChatModel,
*,
tools: list,
tool_choice: Optional[str] = None,
max_attempts: int = 3,
) -> Runnable[Union[List[AnyMessage], PromptValue], AIMessage]:
    """Binds validators + retry logic to ensure the validity of generated tool calls.
This method is similar to `bind_validator_with_retries`, but uses JSONPatch to correct
validation errors caused by passing in incorrect or incomplete parameters in a previous
tool call. This method requires the 'jsonpatch' library to be installed.
Using patch-based function healing can be more efficient than repopulating the entire
tool call from scratch, and it can be an easier task for the LLM to perform, since it typically
only requires a few small changes to the existing tool call.
Args:
        llm (Runnable): The llm that will generate the initial messages (and optionally fallbacks).
tools (list): The tools to bind to the LLM.
tool_choice (Optional[str]): The tool choice to use.
max_attempts (int): The number of attempts to make.
Returns:
Runnable: A runnable that can be invoked with a list of messages and returns a single AI message.
"""
try:
        import jsonpatch  # type: ignore[import-untyped]
except ImportError:
raise ImportError(
"The 'jsonpatch' library is required for JSONPatch-based retries."
)
class JsonPatch(BaseModel):
"""A JSON Patch document represents an operation to be performed on a JSON document.
Note that the op and path are ALWAYS required. Value is required for ALL operations except 'remove'.
Examples:
        ```json
        {"op": "add", "path": "/a/b/c", "value": 1}
        {"op": "replace", "path": "/a/b/c", "value": 2}
        {"op": "remove", "path": "/a/b/c"}
        ```
"""
op: Literal["add", "remove", "replace"] = Field(
...,
description="The operation to be performed. Must be one of 'add', 'remove', 'replace'.",
)
path: str = Field(
...,
description="A JSON Pointer path that references a location within the target document where the operation is performed.",
)
value: Any = Field(
...,
description="The value to be used within the operation. REQUIRED for 'add', 'replace', and 'test' operations.",
)
    class PatchFunctionParameters(BaseModel):
        """Respond with all JSONPatch operations required to correct validation errors
        caused by passing in incorrect or incomplete parameters in a previous tool call."""
tool_call_id: str = Field(
...,
description="The ID of the original tool call that generated the error. Must NOT be an ID of a PatchFunctionParameters tool call.",
)
reasoning: str = Field(
...,
description="Think step-by-step, listing each validation error and the"
" JSONPatch operation needed to correct it. "
"Cite the fields in the JSONSchema you referenced in developing this plan.",
)
patches: list[JsonPatch] = Field(
...,
description="A list of JSONPatch operations to be applied to the previous tool call's response.",
)
bound_llm = llm.bind_tools(tools, tool_choice=tool_choice)
fallback_llm = llm.bind_tools([PatchFunctionParameters])
def aggregate_messages(messages: Sequence[AnyMessage]) -> AIMessage:
# Get all the AI messages and apply json patches
resolved_tool_calls: Dict[Union[str, None], ToolCall] = {}
content: Union[str, List[Union[str, dict]]] = ""
for m in messages:
if m.type != "ai":
continue
if not content:
content = m.content
for tc in m.tool_calls:
if tc["name"] == PatchFunctionParameters.__name__:
tcid = tc["args"]["tool_call_id"]
if tcid not in resolved_tool_calls:
logger.debug(
f"JsonPatch tool call ID {tc['args']['tool_call_id']} not found."
f"Valid tool call IDs: {list(resolved_tool_calls.keys())}"
)
tcid = next(iter(resolved_tool_calls.keys()), None)
orig_tool_call = resolved_tool_calls[tcid]
current_args = orig_tool_call["args"]
patches = tc["args"].get("patches") or []
orig_tool_call["args"] = jsonpatch.apply_patch(
current_args,
patches,
)
orig_tool_call["id"] = tc["id"]
else:
resolved_tool_calls[tc["id"]] = tc.copy()
return AIMessage(
content=content,
tool_calls=list(resolved_tool_calls.values()),
)
def format_exception(error: BaseException, call: ToolCall, schema: Type[BaseModel]):
        return (
            f"Error:\n\n```\n{repr(error)}\n```\n"
            "Expected Parameter Schema:\n\n" + f"```json\n{schema.schema_json()}\n```\n"
            f"Please respond with a JSONPatch to correct the error for tool_call_id=[{call['id']}]."
        )
validator = ValidationNode(
tools + [PatchFunctionParameters],
format_error=format_exception,
)
retry_strategy = RetryStrategy(
max_attempts=max_attempts,
fallback=fallback_llm,
aggregate_messages=aggregate_messages,
)
return _bind_validator_with_retries(
bound_llm,
validator=validator,
retry_strategy=retry_strategy,
tool_choice=tool_choice,
).with_config(metadata={"retry_strategy": "jsonpatch"})
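Now rebind the LLM with the patch-based retry strategy, mirroring the earlier bind_validator_with_retries usage:
bound_llm = bind_validator_with_jsonpatch_retries(
    llm,
    tools=tools,
)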
from IPython.display import Image, display
try:
display(Image(bound_llm.get_graph().draw_mermaid_png()))
except Exception:
pass
chain = prompt | bound_llm
results = chain.invoke(
{
"messages": [
(
"user",
f"Extract the summary from the following conversation:\n\n<convo>\n{formatted}\n</convo>",
),
]
},
)
results.pretty_print()
================================== Ai Message ==================================
[{'text': 'Here is a summary of the key points from the conversation:', 'type': 'text'}, {'id': 'toolu_01JjnQVgzPKLCJxXgEppQpfD', 'input': {'key_moments': [{'topic': 'Drake and Kendrick Lamar beef', 'happy_moments': [{'quote': "It's wild how this beef is shaping fans.", 'description': 'The beef is generating a lot of interest and debate among fans.', 'expressed_preference': {'content': 'The beef can push the genre forward and make artists level up.', 'sources': "When it's done right, a beef can push the genre forward and make artists level up."}}, {'quote': 'I just want both of them to keep dropping heat, beef or no beef.', 'description': 'The key is for Drake and Kendrick to keep making great music regardless of their beef.', 'expressed_preference': {'content': 'Wants Drake and Kendrick to keep making great music, beef or no beef.', 'sources': 'I just want both of them to keep dropping heat, beef or no beef.'}}], 'tense_moments': [{'quote': 'Eh', 'description': 'Unclear if the beef is good for hip-hop.', 'expressed_preference': {'content': 'Unsure if the beef is good for hip-hop.', 'sources': 'Eh'}}], 'sad_moments': [{'quote': "Honestly, I think it'll stay a hot topic for the fans, but unless someone drops a straight-up diss track, it's not gonna escalate.", 'description': "The beef may just stay a topic of discussion among fans, but likely won't escalate unless they release direct diss tracks.", 'expressed_preference': {'content': "The beef will likely remain a topic of discussion but won't escalate unless they release diss tracks.", 'sources': "Honestly, I think it'll stay a hot topic for the fans, but unless someone drops a straight-up diss track, it's not gonna escalate."}}], 'background_info': [{'factoid': {'content': "Kendrick's 'Control' verse kicked off the beef.", 'sources': "Definitely was Kendrick's 'Control' verse that kicked it off."}, 'professions': [], 'why': 'This was the event that started the back-and-forth between Drake and Kendrick.'}, {'factoid': {'content': 'Drake never went directly after Kendrick, just some subtle jabs.', 'sources': 'Drake never went after him directly. Just some subtle jabs here and there.'}, 'professions': [], 'why': "Describes the nature of Drake's response to Kendrick's 'Control' verse."}], 'moments_summary': "The conversation covers the ongoing beef between Drake and Kendrick Lamar, including how it started with Kendrick's 'Control' verse, the subtle jabs back and forth, and debate over whether the beef is ultimately good for hip-hop. There are differing views on whether it will escalate beyond just being a topic of discussion among fans."}]}, 'name': 'TranscriptSummary', 'type': 'tool_use'}]
Tool Calls:
TranscriptSummary (toolu_017FF4ZMezU4sv87aa8cLjRT)
Call ID: toolu_017FF4ZMezU4sv87aa8cLjRT
Args:
key_moments: [{'topic': 'Drake and Kendrick Lamar beef', 'happy_moments': [{'quote': "It's wild how this beef is shaping fans.", 'description': 'The beef is generating a lot of interest and debate among fans.', 'expressed_preference': {'content': 'The beef can push the genre forward and make artists level up.', 'sources': "When it's done right, a beef can push the genre forward and make artists level up."}}, {'quote': 'I just want both of them to keep dropping heat, beef or no beef.', 'description': 'The key is for Drake and Kendrick to keep making great music regardless of their beef.', 'expressed_preference': {'content': 'Wants Drake and Kendrick to keep making great music, beef or no beef.', 'sources': 'I just want both of them to keep dropping heat, beef or no beef.'}}], 'tense_moments': [{'quote': 'Eh', 'description': 'Unclear if the beef is good for hip-hop.', 'expressed_preference': {'content': 'Unsure if the beef is good for hip-hop.', 'sources': 'Eh'}}], 'sad_moments': [{'quote': "Honestly, I think it'll stay a hot topic for the fans, but unless someone drops a straight-up diss track, it's not gonna escalate.", 'description': "The beef may just stay a topic of discussion among fans, but likely won't escalate unless they release direct diss tracks.", 'expressed_preference': {'content': "The beef will likely remain a topic of discussion but won't escalate unless they release diss tracks.", 'sources': "Honestly, I think it'll stay a hot topic for the fans, but unless someone drops a straight-up diss track, it's not gonna escalate."}}], 'background_info': [{'factoid': {'content': "Kendrick's 'Control' verse kicked off the beef.", 'sources': "Definitely was Kendrick's 'Control' verse that kicked it off."}, 'professions': [], 'why': 'This was the event that started the back-and-forth between Drake and Kendrick.'}, {'factoid': {'content': 'Drake never went directly after Kendrick, just some subtle jabs.', 'sources': 'Drake never went after him directly. Just some subtle jabs here and there.'}, 'professions': [], 'why': "Describes the nature of Drake's response to Kendrick's 'Control' verse."}], 'moments_summary': "The conversation covers the ongoing beef between Drake and Kendrick Lamar, including how it started with Kendrick's 'Control' verse, the subtle jabs back and forth, and debate over whether the beef is ultimately good for hip-hop. There are differing views on whether it will escalate beyond just being a topic of discussion among fans."}]
metadata: {'title': 'Drake and Kendrick Beef', 'location': {'sources': 'Conversation transcript', 'content': 'Teleconference'}, 'duration': '25 minutes'}
participants: [{'name': {'sources': 'Conversation transcript', 'content': 'Pete'}, 'background_details': []}, {'name': {'sources': 'Conversation transcript', 'content': 'Xu'}, 'background_details': []}, {'name': {'sources': 'Conversation transcript', 'content': 'Laura'}, 'background_details': []}]
insightful_quotes: []
overall_summary:
next_steps: []
other_stuff: []
It works!¶
Retries are an easy way to reduce function-calling failures. While retries may become less necessary as LLMs grow more capable, data validation will remain important for controlling how LLMs interact with the rest of your software stack.
If you notice high retry rates (using an observability tool like LangSmith), you can set up a rule to send failure cases, along with their corrected values, to a dataset, and then automatically program those cases into your prompt or schema (or use them as few-shot examples to provide semantically relevant demonstrations).