
Tools

Tool module-attribute

A tool that can be used in an agent.

FunctionToolResult dataclass

Source code in src/agents/tool.py
@dataclass
class FunctionToolResult:
    tool: FunctionTool
    """The tool that was run."""

    output: Any
    """The output of the tool."""

    run_item: RunItem
    """The run item that was produced as a result of the tool call."""

tool instance-attribute

tool: FunctionTool

The tool that was run.

output instance-attribute

output: Any

The output of the tool.

run_item instance-attribute

run_item: RunItem

The run item that was produced as a result of the tool call.

FunctionTool dataclass

A tool that wraps a function. In most cases, you should use the `function_tool` helpers to create a FunctionTool, as they let you easily wrap a Python function.

Source code in src/agents/tool.py
@dataclass
class FunctionTool:
    """A tool that wraps a function. In most cases, you should use the `function_tool` helpers to
    create a FunctionTool, as they let you easily wrap a Python function.
    """

    name: str
    """The name of the tool, as shown to the LLM. Generally the name of the function."""

    description: str
    """A description of the tool, as shown to the LLM."""

    params_json_schema: dict[str, Any]
    """The JSON schema for the tool's parameters."""

    on_invoke_tool: Callable[[RunContextWrapper[Any], str], Awaitable[Any]]
    """A function that invokes the tool with the given context and parameters. The params passed
    are:
    1. The tool run context.
    2. The arguments from the LLM, as a JSON string.

    You must return a string representation of the tool output, or something we can call `str()` on.
    In case of errors, you can either raise an Exception (which will cause the run to fail) or
    return a string error message (which will be sent back to the LLM).
    """

    strict_json_schema: bool = True
    """Whether the JSON schema is in strict mode. We **strongly** recommend setting this to True,
    as it increases the likelihood of correct JSON input."""

name instance-attribute

name: str

The name of the tool, as shown to the LLM. Generally the name of the function.

description instance-attribute

description: str

A description of the tool, as shown to the LLM.

params_json_schema instance-attribute

params_json_schema: dict[str, Any]

The JSON schema for the tool's parameters.

on_invoke_tool instance-attribute

on_invoke_tool: Callable[
    [RunContextWrapper[Any], str], Awaitable[Any]
]

A function that invokes the tool with the given context and parameters. The params passed are:

1. The tool run context.
2. The arguments from the LLM, as a JSON string.

You must return a string representation of the tool output, or something `str()` can be called on. In case of errors, you can either raise an Exception (which will cause the run to fail) or return a string error message (which will be sent back to the LLM).
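As a minimal, stdlib-only sketch of that contract (the context parameter is stubbed as `Any` here; the real signature takes a `RunContextWrapper`):

```python
import asyncio
import json
from typing import Any


async def on_invoke_add(ctx: Any, input_json: str) -> str:
    """Parse the LLM's JSON arguments and return a string result."""
    try:
        args = json.loads(input_json) if input_json else {}
        return str(args["a"] + args["b"])
    except Exception as e:
        # Returning a string error message sends it back to the LLM;
        # raising instead would make the whole run fail.
        return f"Error: {e}"


print(asyncio.run(on_invoke_add(None, '{"a": 2, "b": 3}')))  # 5
```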

strict_json_schema class-attribute instance-attribute

strict_json_schema: bool = True

Whether the JSON schema is in strict mode. We strongly recommend setting this to True, as it increases the likelihood of correct JSON input.
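For illustration, a hypothetical strict-mode parameters schema pins the input down completely (all properties required, no extra keys):

```python
# A hypothetical strict-mode JSON schema for a tool taking (city: str, days: int).
params_json_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "days": {"type": "integer"},
    },
    # Strict mode: every property is required and unknown keys are rejected.
    "required": ["city", "days"],
    "additionalProperties": False,
}
```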

FileSearchTool dataclass

A hosted tool that lets the LLM search through a vector store. Currently only supported with OpenAI models, using the Responses API.

Source code in src/agents/tool.py
@dataclass
class FileSearchTool:
    """A hosted tool that lets the LLM search through a vector store. Currently only supported with
    OpenAI models, using the Responses API.
    """

    vector_store_ids: list[str]
    """The IDs of the vector stores to search."""

    max_num_results: int | None = None
    """The maximum number of results to return."""

    include_search_results: bool = False
    """Whether to include the search results in the output produced by the LLM."""

    ranking_options: RankingOptions | None = None
    """Ranking options for search."""

    filters: Filters | None = None
    """A filter to apply based on file attributes."""

    @property
    def name(self):
        return "file_search"

vector_store_ids instance-attribute

vector_store_ids: list[str]

The IDs of the vector stores to search.

max_num_results class-attribute instance-attribute

max_num_results: int | None = None

The maximum number of results to return.

include_search_results class-attribute instance-attribute

include_search_results: bool = False

Whether to include the search results in the output produced by the LLM.

ranking_options class-attribute instance-attribute

ranking_options: RankingOptions | None = None

Ranking options for search.

filters class-attribute instance-attribute

filters: Filters | None = None

A filter to apply based on file attributes.
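For illustration, the fields above roughly correspond to a Responses API `file_search` tool entry; the exact serialization is handled by the SDK, and the vector store ID below is hypothetical:

```python
# Sketch only: roughly how FileSearchTool's fields surface in a
# Responses API "file_search" tool entry (the SDK does the real mapping).
file_search_payload = {
    "type": "file_search",
    "vector_store_ids": ["vs_abc123"],  # hypothetical vector store ID
    "max_num_results": 5,
}
```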

WebSearchTool dataclass

A hosted tool that lets the LLM search the web. Currently only supported with OpenAI models, using the Responses API.

Source code in src/agents/tool.py
@dataclass
class WebSearchTool:
    """A hosted tool that lets the LLM search the web. Currently only supported with OpenAI models,
    using the Responses API.
    """

    user_location: UserLocation | None = None
    """Optional location for the search. Lets you customize results to be relevant to a location."""

    search_context_size: Literal["low", "medium", "high"] = "medium"
    """The amount of context to use for the search."""

    @property
    def name(self):
        return "web_search_preview"

user_location class-attribute instance-attribute

user_location: UserLocation | None = None

Optional location for the search. Lets you customize results to be relevant to a location.

search_context_size class-attribute instance-attribute

search_context_size: Literal["low", "medium", "high"] = (
    "medium"
)

The amount of context to use for the search.
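Similarly, as a sketch of roughly what this dataclass configures (the `user_location` shape shown is an assumption, not a confirmed schema):

```python
# Sketch only: roughly how WebSearchTool's fields surface in a Responses API
# "web_search_preview" tool entry; the user_location shape is assumed.
web_search_payload = {
    "type": "web_search_preview",  # matches WebSearchTool.name
    "user_location": {"type": "approximate", "city": "London"},
    "search_context_size": "medium",  # one of "low" | "medium" | "high"
}
```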

ComputerTool dataclass

A hosted tool that lets the LLM control a computer.

Source code in src/agents/tool.py
@dataclass
class ComputerTool:
    """A hosted tool that lets the LLM control a computer."""

    computer: Computer | AsyncComputer
    """The computer implementation, which describes the environment and dimensions of the computer,
    as well as implements the computer actions like click, screenshot, etc.
    """

    @property
    def name(self):
        return "computer_use_preview"

computer instance-attribute

computer: Computer | AsyncComputer

The computer implementation, which describes the environment and dimensions of the computer, as well as implements the computer actions like click, screenshot, etc.
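The Computer and AsyncComputer protocols are defined elsewhere in the SDK; as a rough illustration only, an implementation supplies screen information plus action methods. The class and method names below are assumptions for the sketch, not the SDK's exact interface:

```python
class FakeComputer:
    """A hypothetical, do-nothing computer backend for illustration only;
    the real Computer/AsyncComputer protocols are defined in the SDK."""

    @property
    def dimensions(self) -> tuple:
        return (1280, 720)  # screen width x height in pixels

    def screenshot(self) -> str:
        return ""  # a real backend would return an encoded screenshot

    def click(self, x: int, y: int, button: str = "left") -> None:
        print(f"click at ({x}, {y}) with the {button} button")


fake = FakeComputer()
fake.click(100, 200)
```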

default_tool_error_function

default_tool_error_function(
    ctx: RunContextWrapper[Any], error: Exception
) -> str

The default tool error function, which just returns a generic error message.

Source code in src/agents/tool.py
def default_tool_error_function(ctx: RunContextWrapper[Any], error: Exception) -> str:
    """The default tool error function, which just returns a generic error message."""
    return f"An error occurred while running the tool. Please try again. Error: {str(error)}"
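To customize what the LLM sees on failure, a function of the same shape can be passed as `failure_error_function`. A sketch (the context parameter is typed loosely as `Any` here):

```python
from typing import Any


def my_tool_error_function(ctx: Any, error: Exception) -> str:
    """A hypothetical replacement error function: name the error type so
    the LLM can decide whether a retry is likely to help."""
    if isinstance(error, TimeoutError):
        return "The tool timed out. Try again with a smaller request."
    return f"Tool failed with {type(error).__name__}: {error}"


print(my_tool_error_function(None, ValueError("bad units")))
```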

function_tool

function_tool(
    func: ToolFunction[...],
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction | None = None,
    strict_mode: bool = True,
) -> FunctionTool
function_tool(
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction | None = None,
    strict_mode: bool = True,
) -> Callable[[ToolFunction[...]], FunctionTool]
function_tool(
    func: ToolFunction[...] | None = None,
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction
    | None = default_tool_error_function,
    strict_mode: bool = True,
) -> (
    FunctionTool
    | Callable[[ToolFunction[...]], FunctionTool]
)

Decorator to create a FunctionTool from a function. By default, we will:

1. Parse the function signature to create a JSON schema for the tool's parameters.
2. Use the function's docstring to populate the tool's description.
3. Use the function's docstring to populate argument descriptions.

The docstring style is detected automatically, but you can override it.

If the function takes a RunContextWrapper as the first argument, it must match the context type of the agent that uses the tool.

Parameters:

Name Type Description Default
func ToolFunction[...] | None

The function to wrap.

None
name_override str | None

If provided, use this name for the tool instead of the function's name.

None
description_override str | None

If provided, use this description for the tool instead of the function's docstring.

None
docstring_style DocstringStyle | None

If provided, use this style for the tool's docstring. If not provided, we will attempt to auto-detect the style.

None
use_docstring_info bool

If True, use the function's docstring to populate the tool's description and argument descriptions.

True
failure_error_function ToolErrorFunction | None

If provided, use this function to generate an error message when the tool call fails. The error message is sent to the LLM. If you pass None, then no error message will be sent and instead an Exception will be raised.

default_tool_error_function
strict_mode bool

Whether to enable strict mode for the tool's JSON schema. We strongly recommend setting this to True, as it increases the likelihood of correct JSON input. If False, it allows non-strict JSON schemas. For example, if a parameter has a default value, it will be optional, additional properties are allowed, etc. See here for more: https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#supported-schemas

True
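The func-or-None overloads exist so the decorator works both bare (@function_tool) and with keyword arguments (@function_tool(...)). The dispatch is a standard Python pattern, sketched here without the SDK (the tool_name attribute is just for the sketch):

```python
from typing import Any, Callable, Optional


def my_decorator(
    func: Optional[Callable[..., Any]] = None,
    *,
    name_override: Optional[str] = None,
):
    """Stdlib sketch of function_tool's bare-vs-parenthesized dispatch."""

    def wrap(the_func: Callable[..., Any]) -> Callable[..., Any]:
        the_func.tool_name = name_override or the_func.__name__
        return the_func

    if callable(func):
        # Used as @my_decorator with no parentheses: func is the function itself.
        return wrap(func)

    # Used as @my_decorator(...): return a decorator that expects the function.
    def decorator(real_func: Callable[..., Any]) -> Callable[..., Any]:
        return wrap(real_func)

    return decorator


@my_decorator
def add(a: int, b: int) -> int:
    return a + b


@my_decorator(name_override="multiply_numbers")
def mul(a: int, b: int) -> int:
    return a * b
```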
Source code in src/agents/tool.py
def function_tool(
    func: ToolFunction[...] | None = None,
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction | None = default_tool_error_function,
    strict_mode: bool = True,
) -> FunctionTool | Callable[[ToolFunction[...]], FunctionTool]:
    """
    Decorator to create a FunctionTool from a function. By default, we will:
    1. Parse the function signature to create a JSON schema for the tool's parameters.
    2. Use the function's docstring to populate the tool's description.
    3. Use the function's docstring to populate argument descriptions.
    The docstring style is detected automatically, but you can override it.

    If the function takes a `RunContextWrapper` as the first argument, it *must* match the
    context type of the agent that uses the tool.

    Args:
        func: The function to wrap.
        name_override: If provided, use this name for the tool instead of the function's name.
        description_override: If provided, use this description for the tool instead of the
            function's docstring.
        docstring_style: If provided, use this style for the tool's docstring. If not provided,
            we will attempt to auto-detect the style.
        use_docstring_info: If True, use the function's docstring to populate the tool's
            description and argument descriptions.
        failure_error_function: If provided, use this function to generate an error message when
            the tool call fails. The error message is sent to the LLM. If you pass None, then no
            error message will be sent and instead an Exception will be raised.
        strict_mode: Whether to enable strict mode for the tool's JSON schema. We *strongly*
            recommend setting this to True, as it increases the likelihood of correct JSON input.
            If False, it allows non-strict JSON schemas. For example, if a parameter has a default
            value, it will be optional, additional properties are allowed, etc. See here for more:
            https://platform.openai.com/docs/guides/structured-outputs?api-mode=responses#supported-schemas
    """

    def _create_function_tool(the_func: ToolFunction[...]) -> FunctionTool:
        schema = function_schema(
            func=the_func,
            name_override=name_override,
            description_override=description_override,
            docstring_style=docstring_style,
            use_docstring_info=use_docstring_info,
            strict_json_schema=strict_mode,
        )

        async def _on_invoke_tool_impl(ctx: RunContextWrapper[Any], input: str) -> Any:
            try:
                json_data: dict[str, Any] = json.loads(input) if input else {}
            except Exception as e:
                if _debug.DONT_LOG_TOOL_DATA:
                    logger.debug(f"Invalid JSON input for tool {schema.name}")
                else:
                    logger.debug(f"Invalid JSON input for tool {schema.name}: {input}")
                raise ModelBehaviorError(
                    f"Invalid JSON input for tool {schema.name}: {input}"
                ) from e

            if _debug.DONT_LOG_TOOL_DATA:
                logger.debug(f"Invoking tool {schema.name}")
            else:
                logger.debug(f"Invoking tool {schema.name} with input {input}")

            try:
                parsed = (
                    schema.params_pydantic_model(**json_data)
                    if json_data
                    else schema.params_pydantic_model()
                )
            except ValidationError as e:
                raise ModelBehaviorError(f"Invalid JSON input for tool {schema.name}: {e}") from e

            args, kwargs_dict = schema.to_call_args(parsed)

            if not _debug.DONT_LOG_TOOL_DATA:
                logger.debug(f"Tool call args: {args}, kwargs: {kwargs_dict}")

            if inspect.iscoroutinefunction(the_func):
                if schema.takes_context:
                    result = await the_func(ctx, *args, **kwargs_dict)
                else:
                    result = await the_func(*args, **kwargs_dict)
            else:
                if schema.takes_context:
                    result = the_func(ctx, *args, **kwargs_dict)
                else:
                    result = the_func(*args, **kwargs_dict)

            if _debug.DONT_LOG_TOOL_DATA:
                logger.debug(f"Tool {schema.name} completed.")
            else:
                logger.debug(f"Tool {schema.name} returned {result}")

            return result

        async def _on_invoke_tool(ctx: RunContextWrapper[Any], input: str) -> Any:
            try:
                return await _on_invoke_tool_impl(ctx, input)
            except Exception as e:
                if failure_error_function is None:
                    raise

                result = failure_error_function(ctx, e)
                if inspect.isawaitable(result):
                    return await result

                _error_tracing.attach_error_to_current_span(
                    SpanError(
                        message="Error running tool (non-fatal)",
                        data={
                            "tool_name": schema.name,
                            "error": str(e),
                        },
                    )
                )
                return result

        return FunctionTool(
            name=schema.name,
            description=schema.description or "",
            params_json_schema=schema.params_json_schema,
            on_invoke_tool=_on_invoke_tool,
            strict_json_schema=strict_mode,
        )

    # If func is actually a callable, we were used as @function_tool with no parentheses
    if callable(func):
        return _create_function_tool(func)

    # Otherwise, we were used as @function_tool(...), so return a decorator
    def decorator(real_func: ToolFunction[...]) -> FunctionTool:
        return _create_function_tool(real_func)

    return decorator
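The outer _on_invoke_tool wrapper above accepts a failure_error_function that is either sync or async. That sync-or-async handling can be sketched in isolation as:

```python
import asyncio
import inspect
from typing import Any, Callable, Optional


async def invoke_with_fallback(
    tool_impl: Callable[[], Any],
    failure_error_function: Optional[Callable[[Exception], Any]],
) -> Any:
    """Sketch of the wrapper's error path: with no error function installed,
    re-raise; otherwise call it, awaiting the result if it is a coroutine."""
    try:
        return tool_impl()
    except Exception as e:
        if failure_error_function is None:
            raise
        result = failure_error_function(e)
        if inspect.isawaitable(result):
            return await result
        return result


def boom() -> str:
    raise ValueError("broken tool")


async def async_handler(e: Exception) -> str:
    return f"async handler saw: {e}"


print(asyncio.run(invoke_with_fallback(boom, lambda e: f"sync handler saw: {e}")))
print(asyncio.run(invoke_with_fallback(boom, async_handler)))
```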