autogen_ext.models.openai#
- class OpenAIChatCompletionClient(**kwargs: Unpack)[source]#
Bases: BaseOpenAIChatCompletionClient, Component[OpenAIClientConfigurationConfigModel]
Chat completion client for OpenAI hosted models.
To use this client, you must install the openai extra:
pip install "autogen-ext[openai]"
You can also use this client for OpenAI-compatible ChatCompletion endpoints. Using this client for non-OpenAI models is not tested or guaranteed.
For non-OpenAI models, please first take a look at our community extensions for additional model clients.
- Parameters:
model (str) – Which OpenAI model to use.
api_key (optional, str) – The API key to use. Required if 'OPENAI_API_KEY' is not found in the environment variables.
organization (optional, str) – The organization ID to use.
base_url (optional, str) – The base URL to use. Required if the model is not hosted on OpenAI.
timeout (optional, float) – The timeout for the request in seconds.
max_retries (optional, int) – The maximum number of retries to attempt.
model_info (optional, ModelInfo) – The capabilities of the model. Required if the model name is not a valid OpenAI model.
frequency_penalty (optional, float)
logit_bias (optional, dict[str, int])
max_tokens (optional, int)
n (optional, int)
presence_penalty (optional, float)
response_format (optional, Literal["json_object", "text"] | pydantic.BaseModel)
seed (optional, int)
temperature (optional, float)
top_p (optional, float)
user (optional, str)
default_headers (optional, dict[str, str]) – Custom headers; useful for authentication or other custom requirements.
add_name_prefixes (optional, bool) – Whether to prepend the source value to each UserMessage content. E.g., "this is content" becomes "Reviewer said: this is content." This can be useful for models that do not support the name field in messages. Defaults to False.
stream_options (optional, dict) – Additional options for streaming. Currently only include_usage is supported.
Examples
The following code snippet shows how to use the client with an OpenAI model:
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.models import UserMessage

openai_client = OpenAIChatCompletionClient(
    model="gpt-4o-2024-08-06",
    # api_key="sk-...", # Optional if you have an OPENAI_API_KEY environment variable set.
)

result = await openai_client.create([UserMessage(content="What is the capital of France?", source="user")])  # type: ignore
print(result)
To use the client with a non-OpenAI model, you need to provide the base URL of the model and the model info. For example, to use Ollama, you can use the following code snippet:
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.models import ModelFamily

custom_model_client = OpenAIChatCompletionClient(
    model="deepseek-r1:1.5b",
    base_url="http://localhost:11434/v1",
    api_key="placeholder",
    model_info={
        "vision": False,
        "function_calling": False,
        "json_output": False,
        "family": ModelFamily.R1,
    },
)
To use structured output as well as function calling, you can use the following code snippet:
import asyncio
from typing import Literal

from autogen_core.models import (
    AssistantMessage,
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    SystemMessage,
    UserMessage,
)
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel


# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Define the function to be called as a tool.
def sentiment_analysis(text: str) -> str:
    """Given a text, return the sentiment."""
    return "happy" if "happy" in text else "sad" if "sad" in text else "neutral"


# Create a FunctionTool instance with `strict=True`,
# which is required for structured output mode.
tool = FunctionTool(sentiment_analysis, description="Sentiment Analysis", strict=True)

# Create an OpenAIChatCompletionClient instance.
model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    response_format=AgentResponse,  # type: ignore
)


async def main() -> None:
    # Generate a response using the tool.
    response1 = await model_client.create(
        messages=[
            SystemMessage(content="Analyze input text sentiment using the tool provided."),
            UserMessage(content="I am happy.", source="user"),
        ],
        tools=[tool],
    )
    print(response1.content)
    # Should be a list of tool calls.
    # [FunctionCall(name="sentiment_analysis", arguments={"text": "I am happy."}, ...)]

    assert isinstance(response1.content, list)
    response2 = await model_client.create(
        messages=[
            SystemMessage(content="Analyze input text sentiment using the tool provided."),
            UserMessage(content="I am happy.", source="user"),
            AssistantMessage(content=response1.content, source="assistant"),
            FunctionExecutionResultMessage(
                content=[
                    FunctionExecutionResult(
                        content="happy",
                        call_id=response1.content[0].id,
                        is_error=False,
                        name="sentiment_analysis",
                    )
                ]
            ),
        ],
    )
    print(response2.content)
    # Should be a structured output.
    # {"thoughts": "The user is happy.", "response": "happy"}


asyncio.run(main())
To load the client from a configuration, you can use the load_component method:
from autogen_core.models import ChatCompletionClient

config = {
    "provider": "OpenAIChatCompletionClient",
    "config": {"model": "gpt-4o", "api_key": "REPLACE_WITH_YOUR_API_KEY"},
}

client = ChatCompletionClient.load_component(config)
To view the full list of available configuration options, see the OpenAIClientConfigurationConfigModel class.
- component_type: ClassVar[ComponentType] = 'model'#
The logical type of the component.
- component_config_schema#
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.openai.OpenAIChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names being a part of the module name.
- _to_config() OpenAIClientConfigurationConfigModel [source]#
Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.
- Returns:
T – The configuration of the component.
- classmethod _from_config(config: OpenAIClientConfigurationConfigModel) Self [source]#
Create a new instance of the component from a configuration object.
- Parameters:
config (T) – The configuration object.
- Returns:
Self – The new instance of the component.
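For orientation, a minimal sketch of how these two hooks pair up. This is an illustrative round-trip, not part of the reference; it assumes an OPENAI_API_KEY in the environment, and in application code the public load_component path shown earlier is the usual entry point:
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(model="gpt-4o")

# _to_config() exports an OpenAIClientConfigurationConfigModel describing this instance...
config_model = client._to_config()

# ...and _from_config() rebuilds an equivalent client from that configuration object.
restored_client = OpenAIChatCompletionClient._from_config(config_model)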
- class AzureOpenAIChatCompletionClient(**kwargs: Unpack)[source]#
Bases: BaseOpenAIChatCompletionClient, Component[AzureOpenAIClientConfigurationConfigModel]
Chat completion client for Azure OpenAI hosted models.
- Parameters:
model (str) – Which OpenAI model to use.
azure_endpoint (str) – The endpoint for the Azure model. Required for Azure models.
azure_deployment (str) – Deployment name for the Azure model. Required for Azure models.
api_version (str) – The API version to use. Required for Azure models.
azure_ad_token (str) – The Azure AD token to use. Provide this or azure_ad_token_provider for token-based authentication.
azure_ad_token_provider (optional, Callable[[], Awaitable[str]] | AzureTokenProvider) – The Azure AD token provider to use. Provide this or azure_ad_token for token-based authentication.
api_key (optional, str) – The API key to use; use this if you are using key-based authentication. It is optional if you are using Azure AD token-based authentication or the AZURE_OPENAI_API_KEY environment variable.
timeout (optional, float) – The timeout for the request in seconds.
max_retries (optional, int) – The maximum number of retries to attempt.
model_info (optional, ModelInfo) – The capabilities of the model. Required if the model name is not a valid OpenAI model.
frequency_penalty (optional, float)
logit_bias (optional, dict[str, int])
max_tokens (optional, int)
n (optional, int)
presence_penalty (optional, float)
response_format (optional, Literal["json_object", "text"])
seed (optional, int)
temperature (optional, float)
top_p (optional, float)
user (optional, str)
default_headers (optional, dict[str, str]) – Custom headers; useful for authentication or other custom requirements.
To use this client, you must install the azure and openai extras:
pip install "autogen-ext[openai,azure]"
To use the client, you need to provide your deployment ID, Azure Cognitive Services endpoint, API version, and model capabilities. For authentication, you can either provide an API key or an Azure Active Directory (AAD) token credential.
The following code snippet shows how to use AAD authentication. The identity used must be assigned the Cognitive Services OpenAI User role.
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

# Create the token provider
token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")

az_model_client = AzureOpenAIChatCompletionClient(
    azure_deployment="{your-azure-deployment}",
    model="{deployed-model, such as 'gpt-4o'}",
    api_version="2024-06-01",
    azure_endpoint="https://{your-custom-endpoint}.openai.azure.com/",
    azure_ad_token_provider=token_provider,  # Optional if you choose key-based authentication.
    # api_key="sk-...", # For key-based authentication. `AZURE_OPENAI_API_KEY` environment variable can also be used instead.
)
To load the client that uses identity-based authentication from a configuration, you can use the load_component method:
from autogen_core.models import ChatCompletionClient

config = {
    "provider": "AzureOpenAIChatCompletionClient",
    "config": {
        "model": "gpt-4o-2024-05-13",
        "azure_endpoint": "https://{your-custom-endpoint}.openai.azure.com/",
        "azure_deployment": "{your-azure-deployment}",
        "api_version": "2024-06-01",
        "azure_ad_token_provider": {
            "provider": "autogen_ext.auth.azure.AzureTokenProvider",
            "config": {
                "provider_kind": "DefaultAzureCredential",
                "scopes": ["https://cognitiveservices.azure.com/.default"],
            },
        },
    },
}

client = ChatCompletionClient.load_component(config)
To view the full list of available configuration options, see the AzureOpenAIClientConfigurationConfigModel class.
Note
Right now only DefaultAzureCredential is supported, with no additional arguments passed to it.
See here for how to use the Azure client directly or for more information.
- component_type: ClassVar[ComponentType] = 'model'#
The logical type of the component.
- component_config_schema#
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.openai.AzureOpenAIChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names being a part of the module name.
- _to_config() AzureOpenAIClientConfigurationConfigModel [source]#
Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.
- Returns:
T – The configuration of the component.
- classmethod _from_config(config: AzureOpenAIClientConfigurationConfigModel) Self [source]#
Create a new instance of the component from a configuration object.
- Parameters:
config (T) – The configuration object.
- Returns:
Self – The new instance of the component.
- class BaseOpenAIChatCompletionClient(client: AsyncOpenAI | AsyncAzureOpenAI, *, create_args: Dict[str, Any], model_capabilities: ModelCapabilities | None = None, model_info: ModelInfo | None = None, add_name_prefixes: bool = False)[source]#
-
- async create(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) CreateResult [source]#
- async create_stream(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None, max_consecutive_empty_chunk_tolerance: int = 0) AsyncGenerator[str | CreateResult, None] [source]#
Creates an AsyncGenerator that yields a stream of chat completions based on the provided messages and tools.
- Parameters:
messages (Sequence[LLMMessage]) – The messages to process.
tools (Sequence[Tool | ToolSchema], optional) – The tools to use for the completion. Defaults to [].
json_output (Optional[bool], optional) – If True, the output will be in JSON format. Defaults to None.
extra_create_args (Mapping[str, Any], optional) – Extra arguments for the creation process. Defaults to {}.
cancellation_token (Optional[CancellationToken], optional) – A token to cancel the operation. Defaults to None.
max_consecutive_empty_chunk_tolerance (int) – [Deprecated] The maximum number of consecutive empty chunks allowed before raising a ValueError. This seems to only be needed when using AzureOpenAIChatCompletionClient. Defaults to 0. This parameter is deprecated; empty chunks will be skipped.
- Yields:
AsyncGenerator[Union[str, CreateResult], None] – A generator yielding the completion results as they are produced.
In streaming, the default behaviour is not to return token usage counts. See: [OpenAI API reference for possible arguments](https://platform.openai.com/docs/api-reference/chat/create). However, setting extra_create_args={"stream_options": {"include_usage": True}} will (if supported by the accessed API) return a final chunk whose usage is set to a RequestUsage object containing prompt and completion token counts, while all preceding chunks will have usage set to None. See: [stream_options](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream_options).
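As an illustration, here is a minimal sketch (not from the reference itself; it assumes an OPENAI_API_KEY in the environment and picks gpt-4o-mini arbitrarily) of requesting usage counts while streaming:
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(model="gpt-4o-mini")


async def main() -> None:
    stream = client.create_stream(
        messages=[UserMessage(content="Write a haiku about the sea.", source="user")],
        extra_create_args={"stream_options": {"include_usage": True}},
    )
    async for chunk in stream:
        if isinstance(chunk, str):
            print(chunk, end="", flush=True)  # Incremental text deltas.
        else:
            # Final CreateResult; its usage field carries the RequestUsage counts.
            print("\nUsage:", chunk.usage)


asyncio.run(main())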
- Other examples of OpenAI-supported arguments that can be included in extra_create_args:
temperature (float): Controls the randomness of the output. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic.
max_tokens (int): The maximum number of tokens to generate in the completion.
top_p (float): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
frequency_penalty (float): A value between -2.0 and 2.0 that penalizes new tokens based on their existing frequency in the text so far, decreasing the likelihood of repeated phrases.
presence_penalty (float): A value between -2.0 and 2.0 that penalizes new tokens based on whether they appear in the text so far, encouraging the model to talk about new topics.
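For example, per-request sampling overrides can be forwarded through extra_create_args without changing the client's defaults. This is a hedged sketch (model name and prompt are arbitrary; an OPENAI_API_KEY is assumed in the environment):
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(model="gpt-4o-mini")


async def main() -> None:
    result = await client.create(
        messages=[UserMessage(content="Name three prime numbers.", source="user")],
        # Per-request overrides forwarded to the underlying chat completion call.
        extra_create_args={"temperature": 0.2, "max_tokens": 64},
    )
    print(result.content)


asyncio.run(main())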
- actual_usage() RequestUsage [source]#
- total_usage() RequestUsage [source]#
- count_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int [source]#
- remaining_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int [source]#
- property capabilities: ModelCapabilities#
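For orientation, a minimal sketch (model name and prompt are arbitrary; an OPENAI_API_KEY is assumed in the environment) of using the token accounting helpers above to check a prompt against the model's context window:
from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(model="gpt-4o-mini")

messages = [UserMessage(content="Summarize the plot of Hamlet.", source="user")]

# Estimated prompt size and how much of the context window remains.
used = client.count_tokens(messages)
left = client.remaining_tokens(messages)
print(f"prompt tokens: {used}, remaining budget: {left}")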
- pydantic model AzureOpenAIClientConfigurationConfigModel[source]#
Bases: BaseOpenAIClientConfigurationConfigModel
Show JSON schema
{ "title": "AzureOpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Api Key" }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" }, "azure_endpoint": { "title": "Azure Endpoint", "type": "string" }, "azure_deployment": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Azure Deployment" }, "api_version": { "title": "Api Version", "type": "string" }, "azure_ad_token": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Azure Ad Token" }, "azure_ad_token_provider": { "anyOf": [ { "$ref": "#/$defs/ComponentModel" }, { "type": "null" } ], "default": null } }, "$defs": { "ComponentModel": { "description": "Model class for a component. 
Contains all information required to instantiate a component.", "properties": { "provider": { "title": "Provider", "type": "string" }, "component_type": { "anyOf": [ { "enum": [ "model", "agent", "tool", "termination", "token_provider" ], "type": "string" }, { "type": "string" }, { "type": "null" } ], "default": null, "title": "Component Type" }, "version": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Version" }, "component_version": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Component Version" }, "description": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Description" }, "label": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Label" }, "config": { "title": "Config", "type": "object" } }, "required": [ "provider", "config" ], "title": "ComponentModel", "type": "object" }, "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a model client.\n\nWe are expecting this to grow over time as we add more features.", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-4o", "o1", "o3", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3.5-haiku", "claude-3.5-sonnet", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" } }, "required": [ "vision", "function_calling", "json_output", "family" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object" ], "title": "Type", "type": "string" } }, "required": [ "type" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model", "azure_endpoint", "api_version" ] }
- Fields:
api_version (str)
azure_ad_token (str | None)
azure_ad_token_provider (autogen_core._component_config.ComponentModel | None)
azure_deployment (str | None)
azure_endpoint (str)
- field azure_ad_token_provider: ComponentModel | None = None#
- pydantic model OpenAIClientConfigurationConfigModel[source]#
Bases: BaseOpenAIClientConfigurationConfigModel
Show JSON schema
{ "title": "OpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Api Key" }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" }, "organization": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Organization" }, "base_url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Base Url" } }, "$defs": { "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a model client.\n\nWe are expecting this to grow over time as we add more features.", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-4o", "o1", "o3", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", 
"gemini-1.5-pro", "gemini-2.0-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3.5-haiku", "claude-3.5-sonnet", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" } }, "required": [ "vision", "function_calling", "json_output", "family" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object" ], "title": "Type", "type": "string" } }, "required": [ "type" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model" ] }
- Fields:
base_url (str | None)
organization (str | None)
- pydantic model BaseOpenAIClientConfigurationConfigModel[source]#
-
Show JSON schema
{ "title": "BaseOpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Api Key" }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" } }, "$defs": { "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo is a dictionary that contains information about a model's properties.\nIt is expected to be used in the model_info property of a model client.\n\nWe are expecting this to grow over time as we add more features.", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-4o", "o1", "o3", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "claude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3.5-haiku", "claude-3.5-sonnet", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" } }, "required": [ 
"vision", "function_calling", "json_output", "family" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object" ], "title": "Type", "type": "string" } }, "required": [ "type" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model" ] }
- Fields:
add_name_prefixes (bool | None)
api_key (str | None)
default_headers (Dict[str, str] | None)
max_retries (int | None)
model (str)
model_capabilities (autogen_core.models._model_client.ModelCapabilities | None)
model_info (autogen_core.models._model_client.ModelInfo | None)
timeout (float | None)
- field model_capabilities: ModelCapabilities | None = None#
- pydantic model CreateArgumentsConfigModel[source]#
Bases: BaseModel
Show JSON schema
{ "title": "CreateArgumentsConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null } }, "$defs": { "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object" ], "title": "Type", "type": "string" } }, "required": [ "type" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } } }
- Fields:
frequency_penalty (float | None)
logit_bias (Dict[str, int] | None)
max_tokens (int | None)
n (int | None)
presence_penalty (float | None)
response_format (autogen_ext.models.openai.config.ResponseFormat | None)
seed (int | None)
stop (str | List[str] | None)
stream_options (autogen_ext.models.openai.config.StreamOptions | None)
temperature (float | None)
top_p (float | None)
user (str | None)