Agents

ToolsToFinalOutputFunction module-attribute

ToolsToFinalOutputFunction: TypeAlias = Callable[
    [RunContextWrapper[TContext], list[FunctionToolResult]],
    MaybeAwaitable[ToolsToFinalOutputResult],
]

A function that takes a run context and a list of tool results, and returns a ToolsToFinalOutputResult.
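
A minimal sketch of such a function, assuming these names are importable from the agents package and that FunctionToolResult exposes its result as .output (per the alias above, the function may also be async):

```python
from agents import FunctionToolResult, RunContextWrapper, ToolsToFinalOutputResult

def stop_after_any_tool(
    context: RunContextWrapper,
    tool_results: list[FunctionToolResult],
) -> ToolsToFinalOutputResult:
    # If any tool ran, use the first result as the final output;
    # otherwise let the LLM run again with the tool outputs.
    if tool_results:
        return ToolsToFinalOutputResult(
            is_final_output=True,
            final_output=tool_results[0].output,  # .output is an assumed attribute
        )
    return ToolsToFinalOutputResult(is_final_output=False)
```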

ToolsToFinalOutputResult dataclass

Source code in src/agents/agent.py
@dataclass
class ToolsToFinalOutputResult:
    is_final_output: bool
    """Whether this is the final output. If False, the LLM will run again and receive the tool call
    output.
    """

    final_output: Any | None = None
    """The final output. Can be None if `is_final_output` is False, otherwise must match the
    `output_type` of the agent.
    """

is_final_output instance-attribute

is_final_output: bool

Whether this is the final output. If False, the LLM will run again and receive the tool call output.

final_output class-attribute instance-attribute

final_output: Any | None = None

The final output. Can be None if is_final_output is False; otherwise it must match the output_type of the agent.

StopAtTools

Bases: TypedDict

Source code in src/agents/agent.py
class StopAtTools(TypedDict):
    stop_at_tool_names: list[str]
    """A list of tool names, any of which will stop the agent from running further."""

stop_at_tool_names instance-attribute

stop_at_tool_names: list[str]

A list of tool names, any of which will stop the agent from running further.
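
Since StopAtTools is a TypedDict, it is typically passed to tool_use_behavior as a plain dict. A sketch, using the function_tool decorator that appears in the source listings below; fetch_weather is a made-up tool:

```python
from agents import Agent, function_tool

@function_tool
def fetch_weather(city: str) -> str:
    """Return a canned weather report for the given city."""
    return f"The weather in {city} is sunny."

agent = Agent(
    name="Weather agent",
    tools=[fetch_weather],
    # Stop as soon as fetch_weather is called; its output becomes the final
    # output and the LLM does not process the tool result.
    tool_use_behavior={"stop_at_tool_names": ["fetch_weather"]},
)
```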

Agent dataclass

Bases: Generic[TContext]

An agent is an AI model configured with instructions, tools, guardrails, handoffs and more.

We strongly recommend passing instructions, which is the "system prompt" for the agent. In addition, you can pass handoff_description, a human-readable description of the agent, used when the agent is used inside tools/handoffs.

Agents are generic on the context type. The context is a (mutable) object you create. It is passed to tool functions, handoffs, guardrails, etc.
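
A minimal construction sketch. Runner.run is taken from the source listing below; that the returned run result exposes final_output is an assumption:

```python
import asyncio
from agents import Agent, Runner

agent = Agent(
    name="Haiku agent",
    instructions="Always respond in haiku form.",
)

async def main() -> None:
    result = await Runner.run(starting_agent=agent, input="Tell me about recursion.")
    print(result.final_output)  # assumed attribute on the returned run result

asyncio.run(main())
```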

Source code in src/agents/agent.py
@dataclass
class Agent(Generic[TContext]):
    """An agent is an AI model configured with instructions, tools, guardrails, handoffs and more.

    We strongly recommend passing `instructions`, which is the "system prompt" for the agent. In
    addition, you can pass `handoff_description`, which is a human-readable description of the
    agent, used when the agent is used inside tools/handoffs.

    Agents are generic on the context type. The context is a (mutable) object you create. It is
    passed to tool functions, handoffs, guardrails, etc.
    """

    name: str
    """The name of the agent."""

    instructions: (
        str
        | Callable[
            [RunContextWrapper[TContext], Agent[TContext]],
            MaybeAwaitable[str],
        ]
        | None
    ) = None
    """The instructions for the agent. Will be used as the "system prompt" when this agent is
    invoked. Describes what the agent should do, and how it responds.

    Can either be a string, or a function that dynamically generates instructions for the agent. If
    you provide a function, it will be called with the context and the agent instance. It must
    return a string.
    """

    handoff_description: str | None = None
    """A description of the agent. This is used when the agent is used as a handoff, so that an
    LLM knows what it does and when to invoke it.
    """

    handoffs: list[Agent[Any] | Handoff[TContext]] = field(default_factory=list)
    """Handoffs are sub-agents that the agent can delegate to. You can provide a list of handoffs,
    and the agent can choose to delegate to them if relevant. Allows for separation of concerns and
    modularity.
    """

    model: str | Model | None = None
    """The model implementation to use when invoking the LLM.

    By default, if not set, the agent will use the default model configured in
    `model_settings.DEFAULT_MODEL`.
    """

    model_settings: ModelSettings = field(default_factory=ModelSettings)
    """Configures model-specific tuning parameters (e.g. temperature, top_p).
    """

    tools: list[Tool] = field(default_factory=list)
    """A list of tools that the agent can use."""

    mcp_servers: list[MCPServer] = field(default_factory=list)
    """A list of [Model Context Protocol](https://modelcontextprotocol.io/) servers that
    the agent can use. Every time the agent runs, it will include tools from these servers in the
    list of available tools.

    NOTE: You are expected to manage the lifecycle of these servers. Specifically, you must call
    `server.connect()` before passing it to the agent, and `server.cleanup()` when the server is no
    longer needed.
    """

    input_guardrails: list[InputGuardrail[TContext]] = field(default_factory=list)
    """A list of checks that run in parallel to the agent's execution, before generating a
    response. Runs only if the agent is the first agent in the chain.
    """

    output_guardrails: list[OutputGuardrail[TContext]] = field(default_factory=list)
    """A list of checks that run on the final output of the agent, after generating a response.
    Runs only if the agent produces a final output.
    """

    output_type: type[Any] | None = None
    """The type of the output object. If not provided, the output will be `str`."""

    hooks: AgentHooks[TContext] | None = None
    """A class that receives callbacks on various lifecycle events for this agent.
    """

    tool_use_behavior: (
        Literal["run_llm_again", "stop_on_first_tool"] | StopAtTools | ToolsToFinalOutputFunction
    ) = "run_llm_again"
    """This lets you configure how tool use is handled.
    - "run_llm_again": The default behavior. Tools are run, and then the LLM receives the results
        and gets to respond.
    - "stop_on_first_tool": The output of the first tool call is used as the final output. This
        means that the LLM does not process the result of the tool call.
    - A list of tool names: The agent will stop running if any of the tools in the list are called.
        The final output will be the output of the first matching tool call. The LLM does not
        process the result of the tool call.
    - A function: If you pass a function, it will be called with the run context and the list of
      tool results. It must return a `ToolsToFinalOutputResult`, which determines whether the tool
      calls result in a final output.

      NOTE: This configuration is specific to FunctionTools. Hosted tools, such as file search,
      web search, etc are always processed by the LLM.
    """

    reset_tool_choice: bool = True
    """Whether to reset the tool choice to the default value after a tool has been called. Defaults
    to True. This ensures that the agent doesn't enter an infinite loop of tool usage."""

    def clone(self, **kwargs: Any) -> Agent[TContext]:
        """Make a copy of the agent, with the given arguments changed. For example, you could do:
        ```
        new_agent = agent.clone(instructions="New instructions")
        ```
        """
        return dataclasses.replace(self, **kwargs)

    def as_tool(
        self,
        tool_name: str | None,
        tool_description: str | None,
        custom_output_extractor: Callable[[RunResult], Awaitable[str]] | None = None,
    ) -> Tool:
        """Transform this agent into a tool, callable by other agents.

        This is different from handoffs in two ways:
        1. In handoffs, the new agent receives the conversation history. In this tool, the new agent
           receives generated input.
        2. In handoffs, the new agent takes over the conversation. In this tool, the new agent is
           called as a tool, and the conversation is continued by the original agent.

        Args:
            tool_name: The name of the tool. If not provided, the agent's name will be used.
            tool_description: The description of the tool, which should indicate what it does and
                when to use it.
            custom_output_extractor: A function that extracts the output from the agent. If not
                provided, the last message from the agent will be used.
        """

        @function_tool(
            name_override=tool_name or _transforms.transform_string_function_style(self.name),
            description_override=tool_description or "",
        )
        async def run_agent(context: RunContextWrapper, input: str) -> str:
            from .run import Runner

            output = await Runner.run(
                starting_agent=self,
                input=input,
                context=context.context,
            )
            if custom_output_extractor:
                return await custom_output_extractor(output)

            return ItemHelpers.text_message_outputs(output.new_items)

        return run_agent

    async def get_system_prompt(self, run_context: RunContextWrapper[TContext]) -> str | None:
        """Get the system prompt for the agent."""
        if isinstance(self.instructions, str):
            return self.instructions
        elif callable(self.instructions):
            if inspect.iscoroutinefunction(self.instructions):
                return await cast(Awaitable[str], self.instructions(run_context, self))
            else:
                return cast(str, self.instructions(run_context, self))
        elif self.instructions is not None:
            logger.error(f"Instructions must be a string or a function, got {self.instructions}")

        return None

    async def get_mcp_tools(self) -> list[Tool]:
        """Fetches the available tools from the MCP servers."""
        return await MCPUtil.get_all_function_tools(self.mcp_servers)

    async def get_all_tools(self) -> list[Tool]:
        """All agent tools, including MCP tools and function tools."""
        mcp_tools = await self.get_mcp_tools()
        return mcp_tools + self.tools

name instance-attribute

name: str

The name of the agent.

instructions class-attribute instance-attribute

instructions: (
    str
    | Callable[
        [RunContextWrapper[TContext], Agent[TContext]],
        MaybeAwaitable[str],
    ]
    | None
) = None

The instructions for the agent. Will be used as the "system prompt" when this agent is invoked. Describes what the agent should do and how it responds.

Can either be a string, or a function that dynamically generates instructions for the agent. If you provide a function, it will be called with the context and the agent instance. It must return a string.
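
A sketch of a dynamic instructions function; the user_name attribute on the context object is hypothetical:

```python
from agents import Agent, RunContextWrapper

def personalized_instructions(ctx: RunContextWrapper, agent: Agent) -> str:
    # ctx.context is the user-created context object; user_name is a
    # hypothetical attribute on it.
    return f"You are {agent.name}. Address the user as {ctx.context.user_name}."

agent = Agent(name="Concierge", instructions=personalized_instructions)
```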

handoff_description class-attribute instance-attribute

handoff_description: str | None = None

A description of the agent. This is used when the agent is used as a handoff, so that an LLM knows what it does and when to invoke it.

handoffs class-attribute instance-attribute

handoffs: list[Agent[Any] | Handoff[TContext]] = field(
    default_factory=list
)

Handoffs are sub-agents that the agent can delegate to. You can provide a list of handoffs, and the agent can choose to delegate to them if relevant. Allows for separation of concerns and modularity.
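
A sketch of a triage setup with two hypothetical sub-agents:

```python
from agents import Agent

billing_agent = Agent(name="Billing agent", instructions="Handle billing questions.")
refund_agent = Agent(name="Refund agent", instructions="Handle refund requests.")

triage_agent = Agent(
    name="Triage agent",
    instructions="Route the user to the right specialist.",
    # The triage agent may delegate to either sub-agent when relevant.
    handoffs=[billing_agent, refund_agent],
)
```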

model class-attribute instance-attribute

model: str | Model | None = None

The model implementation to use when invoking the LLM.

By default, if not set, the agent will use the default model configured in model_settings.DEFAULT_MODEL.

model_settings class-attribute instance-attribute

model_settings: ModelSettings = field(
    default_factory=ModelSettings
)

Configures model-specific tuning parameters (e.g. temperature, top_p).
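
For example (a sketch; that ModelSettings takes temperature and top_p as constructor arguments is an assumption based on the description above):

```python
from agents import Agent, ModelSettings

agent = Agent(
    name="Terse agent",
    instructions="Answer in one sentence.",
    # Lower temperature for more deterministic, focused responses.
    model_settings=ModelSettings(temperature=0.1, top_p=0.9),
)
```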

tools class-attribute instance-attribute

tools: list[Tool] = field(default_factory=list)

A list of tools that the agent can use.

mcp_servers class-attribute instance-attribute

mcp_servers: list[MCPServer] = field(default_factory=list)

A list of Model Context Protocol (https://modelcontextprotocol.io/) servers that the agent can use. Every time the agent runs, it will include tools from these servers in the list of available tools.

NOTE: You are expected to manage the lifecycle of these servers. Specifically, you must call server.connect() before passing a server to the agent, and server.cleanup() when it is no longer needed.
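
A sketch of that lifecycle. MCPServerStdio, its params argument, and the agents.mcp import path are assumptions; connect() and cleanup() come from the note above and are assumed to be awaitable:

```python
import asyncio
from agents import Agent
from agents.mcp import MCPServerStdio  # import path is an assumption

async def main() -> None:
    server = MCPServerStdio(  # hypothetical stdio-based MCP server wrapper
        params={"command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]},
    )
    await server.connect()  # connect before handing the server to the agent
    try:
        agent = Agent(name="Filesystem agent", mcp_servers=[server])
        ...  # run the agent here
    finally:
        await server.cleanup()  # release the server once it is no longer needed

asyncio.run(main())
```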

input_guardrails class-attribute instance-attribute

input_guardrails: list[InputGuardrail[TContext]] = field(
    default_factory=list
)

A list of checks that run in parallel to the agent's execution, before generating a response. Runs only if the agent is the first agent in the chain.

output_guardrails class-attribute instance-attribute

output_guardrails: list[OutputGuardrail[TContext]] = field(
    default_factory=list
)

A list of checks that run on the final output of the agent, after generating a response. Runs only if the agent produces a final output.

output_type class-attribute instance-attribute

output_type: type[Any] | None = None

The type of the output object. If not provided, the output will be str.
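
A sketch with a structured output type; whether a plain dataclass (as opposed to, say, a Pydantic model) is accepted depends on the SDK's structured-output support:

```python
from dataclasses import dataclass
from agents import Agent

@dataclass
class CalendarEvent:
    title: str
    date: str

# The agent's final output is now expected to match CalendarEvent rather than str.
agent = Agent(name="Event extractor", output_type=CalendarEvent)
```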

hooks class-attribute instance-attribute

hooks: AgentHooks[TContext] | None = None

A class that receives callbacks on various lifecycle events for this agent.

tool_use_behavior class-attribute instance-attribute

tool_use_behavior: (
    Literal["run_llm_again", "stop_on_first_tool"]
    | StopAtTools
    | ToolsToFinalOutputFunction
) = "run_llm_again"

This lets you configure how tool use is handled.

- "run_llm_again": The default behavior. Tools are run, and then the LLM receives the results and gets to respond.
- "stop_on_first_tool": The output of the first tool call is used as the final output. This means that the LLM does not process the result of the tool call.
- A list of tool names: The agent will stop running if any of the tools in the list are called. The final output will be the output of the first matching tool call. The LLM does not process the result of the tool call.
- A function: If you pass a function, it will be called with the run context and the list of tool results. It must return a ToolsToFinalOutputResult, which determines whether the tool calls result in a final output.

NOTE: This configuration is specific to FunctionTools. Hosted tools, such as file search, web search, etc. are always processed by the LLM.
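
A sketch of the two string modes (the StopAtTools and function forms are sketched in their own sections above):

```python
from agents import Agent

# Default: tool results are fed back to the LLM for another turn.
agent_a = Agent(name="Assistant", tool_use_behavior="run_llm_again")

# The first tool call's output is returned directly as the final output.
agent_b = Agent(name="Assistant", tool_use_behavior="stop_on_first_tool")
```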

reset_tool_choice class-attribute instance-attribute

reset_tool_choice: bool = True

Whether to reset the tool choice to the default value after a tool has been called. Defaults to True. This ensures that the agent doesn't enter an infinite loop of tool usage.

clone

clone(**kwargs: Any) -> Agent[TContext]

Make a copy of the agent, with the given arguments changed. For example, you could do:

new_agent = agent.clone(instructions="New instructions")

Source code in src/agents/agent.py
def clone(self, **kwargs: Any) -> Agent[TContext]:
    """Make a copy of the agent, with the given arguments changed. For example, you could do:
    ```
    new_agent = agent.clone(instructions="New instructions")
    ```
    """
    return dataclasses.replace(self, **kwargs)

as_tool

as_tool(
    tool_name: str | None,
    tool_description: str | None,
    custom_output_extractor: Callable[
        [RunResult], Awaitable[str]
    ]
    | None = None,
) -> Tool

Transform this agent into a tool, callable by other agents.

This is different from handoffs in two ways:

1. In handoffs, the new agent receives the conversation history. In this tool, the new agent receives generated input.
2. In handoffs, the new agent takes over the conversation. In this tool, the new agent is called as a tool, and the conversation is continued by the original agent.

A usage sketch follows the source listing below.

Parameters:

tool_name (str | None, required):
    The name of the tool. If not provided, the agent's name will be used.

tool_description (str | None, required):
    The description of the tool, which should indicate what it does and when to use it.

custom_output_extractor (Callable[[RunResult], Awaitable[str]] | None, default None):
    A function that extracts the output from the agent. If not provided, the last message from the agent will be used.
Source code in src/agents/agent.py
def as_tool(
    self,
    tool_name: str | None,
    tool_description: str | None,
    custom_output_extractor: Callable[[RunResult], Awaitable[str]] | None = None,
) -> Tool:
    """Transform this agent into a tool, callable by other agents.

    This is different from handoffs in two ways:
    1. In handoffs, the new agent receives the conversation history. In this tool, the new agent
       receives generated input.
    2. In handoffs, the new agent takes over the conversation. In this tool, the new agent is
       called as a tool, and the conversation is continued by the original agent.

    Args:
        tool_name: The name of the tool. If not provided, the agent's name will be used.
        tool_description: The description of the tool, which should indicate what it does and
            when to use it.
        custom_output_extractor: A function that extracts the output from the agent. If not
            provided, the last message from the agent will be used.
    """

    @function_tool(
        name_override=tool_name or _transforms.transform_string_function_style(self.name),
        description_override=tool_description or "",
    )
    async def run_agent(context: RunContextWrapper, input: str) -> str:
        from .run import Runner

        output = await Runner.run(
            starting_agent=self,
            input=input,
            context=context.context,
        )
        if custom_output_extractor:
            return await custom_output_extractor(output)

        return ItemHelpers.text_message_outputs(output.new_items)

    return run_agent
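
A usage sketch, with a hypothetical translator sub-agent exposed as a tool:

```python
from agents import Agent

spanish_agent = Agent(
    name="Spanish translator",
    instructions="Translate the user's message to Spanish.",
)

orchestrator = Agent(
    name="Orchestrator",
    instructions="Use your tools to translate when asked.",
    tools=[
        # The sub-agent becomes a callable tool; the orchestrator keeps
        # control of the conversation.
        spanish_agent.as_tool(
            tool_name="translate_to_spanish",
            tool_description="Translate the user's message to Spanish.",
        )
    ],
)
```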

get_system_prompt async

get_system_prompt(
    run_context: RunContextWrapper[TContext],
) -> str | None

Get the system prompt for the agent.

Source code in src/agents/agent.py
async def get_system_prompt(self, run_context: RunContextWrapper[TContext]) -> str | None:
    """Get the system prompt for the agent."""
    if isinstance(self.instructions, str):
        return self.instructions
    elif callable(self.instructions):
        if inspect.iscoroutinefunction(self.instructions):
            return await cast(Awaitable[str], self.instructions(run_context, self))
        else:
            return cast(str, self.instructions(run_context, self))
    elif self.instructions is not None:
        logger.error(f"Instructions must be a string or a function, got {self.instructions}")

    return None

get_mcp_tools async

get_mcp_tools() -> list[Tool]

Fetches the available tools from the MCP servers.

Source code in src/agents/agent.py
async def get_mcp_tools(self) -> list[Tool]:
    """Fetches the available tools from the MCP servers."""
    return await MCPUtil.get_all_function_tools(self.mcp_servers)

get_all_tools async

get_all_tools() -> list[Tool]

All agent tools, including MCP tools and function tools.

Source code in src/agents/agent.py
async def get_all_tools(self) -> list[Tool]:
    """All agent tools, including MCP tools and function tools."""
    mcp_tools = await self.get_mcp_tools()
    return mcp_tools + self.tools