Enhanced Support for Non-OpenAI Models

· 11 min read
Mark Sze
Hrushikesh Dokala

agents

TL;DR

  • AutoGen has expanded integrations with a variety of cloud-based model providers beyond OpenAI.
  • Leverage models and platforms from Gemini, Anthropic, Mistral AI, Together.AI, and Groq for your AutoGen agents.
  • Utilise models specialised for chat, language, image, and coding.
  • Diversifying across LLM providers can bring cost and resilience benefits.

In addition to the recently released AutoGen Google Gemini client, new client classes for Mistral AI, Anthropic, Together.AI, and Groq enable you to utilise over 75 different large language models in your AutoGen agent workflows.

These new client classes tailor AutoGen's underlying messages to each provider's unique requirements and move that complexity out of developers' hands, so they can focus on building their AutoGen workflows.

Using them is as simple as installing the provider-specific client library and updating your LLM config with the relevant api_type and model. We'll demonstrate how to use them below.

The community is continuing to enhance and build new client classes as cloud-based inference providers arrive. So, watch this space, and feel free to discuss or develop another one.

Benefits of Choice

Over the past 12 months, the need to use only the best models to overcome workflow-breaking LLM inconsistency has diminished considerably.

These new classes provide access to the very large trillion-parameter models from OpenAI, Google, and Anthropic, which continue to deliver the most consistent and capable agent experiences. However, it is worth trying the smaller models from the likes of Meta, Mistral AI, Microsoft, Qwen, and others. Perhaps they are capable enough for a given task or sub-task, or even better suited (such as a coding model)!

Using smaller models will have cost benefits, but they also let you test models that you could run locally, allowing you to determine whether you can remove cloud inference costs altogether or even run an AutoGen workflow offline.

On the cost front, these client classes also include provider-specific token cost calculations, so you can monitor the cost impact of your workflows. With costs per million tokens as low as 10 cents (and some are even free!), the savings can be significant.
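As a back-of-the-envelope illustration of that point, per-token pricing makes cost estimates simple multiplications. The prices below are hypothetical placeholders, not any provider's actual rates:

```python
# Hypothetical per-million-token prices in USD (check your provider's rates)
PRICE_PER_MILLION = {"input": 0.10, "output": 0.10}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough cost in USD for a single request, given token counts."""
    return (input_tokens * PRICE_PER_MILLION["input"]
            + output_tokens * PRICE_PER_MILLION["output"]) / 1_000_000

# A request with 50k input and 10k output tokens at 10 cents per million:
print(f"${estimate_cost(50_000, 10_000):.4f}")
```

The client classes perform this calculation per provider for you; the sketch just shows why sub-dollar workflow costs are realistic at these price points.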

Mix and Match

How does Google's Gemini 1.5 Pro model stack up against Anthropic's Opus or Meta's Llama 3?

Now you can quickly change your agent configurations and find out. If you want to run all three in one workflow, AutoGen's ability to associate specific configurations with each agent means you can choose the best LLM for each agent.
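As a sketch of what that mixing might look like, one pooled config list can be filtered per agent by api_type. The model identifiers below are illustrative and may not match current model names:

```python
# One pooled list of provider configs (keys trimmed for brevity;
# model names are illustrative placeholders).
config_list = [
    {"api_type": "google", "model": "gemini-1.5-pro", "api_key": "..."},
    {"api_type": "anthropic", "model": "claude-3-opus-20240229", "api_key": "..."},
    {"api_type": "together", "model": "meta-llama/Llama-3-70b-chat-hf", "api_key": "..."},
]

def configs_for(api_type: str) -> list:
    """Select the entries for a single provider."""
    return [c for c in config_list if c["api_type"] == api_type]

# Each agent can then be given its own llm_config, e.g.
# AssistantAgent(..., llm_config={"config_list": configs_for("anthropic")})
print(configs_for("anthropic")[0]["model"])
```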

Capabilities

The common requirements of text generation and function/tool calling are supported by these client classes.

Multi-modal support, such as for image/audio/video, is an area of active development. The Google Gemini client class can be used to create multimodal agents.

Tips

Here are some tips when working with these client classes:

  • Strongest first - start with a larger model and get your workflow working, then iteratively try smaller models.
  • Right model - choose one that is suited to your task, whether it be coding, function calling, knowledge, or creative writing.
  • Agent names - these cloud providers do not use the name field on a message, so be sure to use your agent's name in their system_message and description fields, as well as instructing the LLM to 'act as' them. This is particularly important for "auto" speaker selection in group chats, as we need to guide the LLM to choose the next agent based on a name, so tweak select_speaker_message_template, select_speaker_prompt_template, and select_speaker_auto_multiple_template with more guidance.
  • Context length - as your conversation gets longer, models need to support larger context lengths. Be mindful of what the model supports, and consider using transform messages to manage context size.
  • Provider parameters - providers have parameters you can set, such as temperature, maximum tokens, top-k, top-p, and safety. See each client class in AutoGen's API Reference documentation for details.
  • Prompts - prompt engineering is critical in guiding smaller LLMs to do what you need. ConversableAgent, GroupChat, UserProxyAgent, and AssistantAgent all have customisable prompt attributes that you can tailor. Here are some prompting tips from Anthropic (+Library), Mistral AI, Together.AI, and Meta.
  • Help! - reach out on AutoGen's Discord or raise an issue if you need help with or can help improve these client classes.
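On the context-length tip above, AutoGen's transform-messages capability can trim history before each call. The stand-alone sketch below shows only the underlying idea; it is not the actual TransformMessages implementation:

```python
# Keep only the most recent messages so the request fits the model's
# context window. AutoGen's TransformMessages capability offers this
# (and token-based variants) without hand-rolling it.
def limit_history(messages: list, max_messages: int = 4) -> list:
    if len(messages) <= max_messages:
        return messages
    return messages[-max_messages:]

history = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
print(len(limit_history(history)))  # only the last 4 turns are kept
```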

Now it's time to try them out.

Quickstart

Installation

Install the appropriate client based on the model you wish to use.

pip install "autogen-agentchat[mistral]~=0.2"    # for Mistral AI client
pip install "autogen-agentchat[anthropic]~=0.2"  # for Anthropic client
pip install "autogen-agentchat[together]~=0.2"   # for Together.AI client
pip install "autogen-agentchat[groq]~=0.2"       # for Groq client

Configuration Setup

Add your model configurations to your OAI_CONFIG_LIST. Ensure you specify the api_type to initialise the respective client (Anthropic, Mistral, Together.AI, or Groq).

[
    {
        "model": "your anthropic model name",
        "api_key": "your Anthropic api_key",
        "api_type": "anthropic"
    },
    {
        "model": "your mistral model name",
        "api_key": "your Mistral AI api_key",
        "api_type": "mistral"
    },
    {
        "model": "your together.ai model name",
        "api_key": "your Together.AI api_key",
        "api_type": "together"
    },
    {
        "model": "your groq model name",
        "api_key": "your Groq api_key",
        "api_type": "groq"
    }
]

Usage

The [config_list_from_json](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils/#config_list_from_json) function loads a list of configurations from an environment variable or a json file.

import autogen
from autogen import AssistantAgent, UserProxyAgent

config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST"
)
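For reference, here is a simplified stand-in for what config_list_from_json roughly does (an approximation for illustration, not the actual implementation): it tries the named environment variable first, then falls back to a file of the same name.

```python
import json
import os

def load_config_list(name: str = "OAI_CONFIG_LIST") -> list:
    """Read a JSON config list from an env var, else from a file (sketch)."""
    raw = os.environ.get(name)
    if raw is None:
        with open(name) as f:
            raw = f.read()
    return json.loads(raw)
```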

Construct Agents

Construct a simple conversation between a User proxy and an Assistant agent.

user_proxy = UserProxyAgent(
    name="User_proxy",
    code_execution_config={
        "last_n_messages": 2,
        "work_dir": "groupchat",
        "use_docker": False,  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    },
    human_input_mode="ALWAYS",
    is_termination_msg=lambda msg: not msg["content"],
)

assistant = AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

Start Chat


user_proxy.initiate_chat(assistant, message="Write python code to print Hello World!")

Note: To integrate this setup into GroupChat, follow the tutorial with the same config as above.

Function Calls

Now, let's look at how Anthropic's Sonnet 3.5 is able to suggest multiple function calls in a single response.

This example is a simple travel agent setup with an agent for function calling and a user proxy agent for executing the functions.

One thing you'll note here is that Anthropic's models are more verbose than OpenAI's and will typically reply with chain-of-thought or general verbiage. Therefore, we give the functionbot more explicit instructions so that it doesn't respond with anything extra. Even so, it can't always help itself!

Let's start with setting up our configuration and agents.

import os
import autogen
import json
from typing import Literal
from typing_extensions import Annotated

# Anthropic configuration, using api_type='anthropic'
anthropic_llm_config = {
    "config_list": [
        {
            "api_type": "anthropic",
            "model": "claude-3-5-sonnet-20240620",
            "api_key": os.getenv("ANTHROPIC_API_KEY"),
            "cache_seed": None,
        }
    ]
}

# Our functionbot, who will be assigned two functions and
# given directions to use them.
functionbot = autogen.AssistantAgent(
    name="functionbot",
    system_message="For currency exchange tasks, only use "
    "the functions you have been provided with. Do not "
    "reply with helpful tips. Once you've recommended functions "
    "reply with 'TERMINATE'.",
    is_termination_msg=lambda x: x.get("content", "") and (x.get("content", "").rstrip().endswith("TERMINATE") or x.get("content", "") == ""),
    llm_config=anthropic_llm_config,
)

# Our user proxy agent, who will be used to manage the customer
# request and conversation with the functionbot, terminating
# when we have the information we need.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    system_message="You are a travel agent that provides "
    "specific information to your customers. Get the "
    "information you need and provide a great summary "
    "so your customer can have a great trip. If you "
    "have the information you need, simply reply with "
    "'TERMINATE'.",
    is_termination_msg=lambda x: x.get("content", "") and (x.get("content", "").rstrip().endswith("TERMINATE") or x.get("content", "") == ""),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
)

We define the two functions.

CurrencySymbol = Literal["USD", "EUR"]

def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:
    if base_currency == quote_currency:
        return 1.0
    elif base_currency == "USD" and quote_currency == "EUR":
        return 1 / 1.1
    elif base_currency == "EUR" and quote_currency == "USD":
        return 1.1
    else:
        raise ValueError(f"Unknown currencies {base_currency}, {quote_currency}")

def get_current_weather(location, unit="fahrenheit"):
    """Get the weather for some location"""
    if "chicago" in location.lower():
        return json.dumps({"location": "Chicago", "temperature": "13", "unit": unit})
    elif "san francisco" in location.lower():
        return json.dumps({"location": "San Francisco", "temperature": "55", "unit": unit})
    elif "new york" in location.lower():
        return json.dumps({"location": "New York", "temperature": "11", "unit": unit})
    else:
        return json.dumps({"location": location, "temperature": "unknown"})

Then we associate them with the user_proxy for execution and with the functionbot so the LLM can consider using them.

@user_proxy.register_for_execution()
@functionbot.register_for_llm(description="Currency exchange calculator.")
def currency_calculator(
    base_amount: Annotated[float, "Amount of currency in base_currency"],
    base_currency: Annotated[CurrencySymbol, "Base currency"] = "USD",
    quote_currency: Annotated[CurrencySymbol, "Quote currency"] = "EUR",
) -> str:
    quote_amount = exchange_rate(base_currency, quote_currency) * base_amount
    return f"{quote_amount} {quote_currency}"

@user_proxy.register_for_execution()
@functionbot.register_for_llm(description="Weather forecast for US cities.")
def weather_forecast(
    location: Annotated[str, "City name"],
) -> str:
    weather_details = get_current_weather(location=location)
    weather = json.loads(weather_details)
    return f"{weather['location']} will be {weather['temperature']} degrees {weather['unit']}"
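One detail worth noting: currency_calculator formats the raw floating-point product, which is why the transcript further down shows "913.0000000000001 USD". If you want tidier replies, a small optional tweak is to round to cents before formatting:

```python
# Binary floating point: 830 * 1.1 carries a tiny rounding artifact
quote_amount = 830 * 1.1
print(quote_amount)

# Rounding to two decimal places before formatting keeps the reply tidy
print(f"{round(quote_amount, 2)} USD")
```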

Finally, we start the conversation with a request for a customer who is heading to New York and wants to change some Euros into US dollars.

Importantly, we're also using Anthropic's Sonnet to provide a summary through the summary_method. Using summary_prompt, we guide Sonnet to produce an email-formatted output for us.

# start the conversation
res = user_proxy.initiate_chat(
    functionbot,
    message="My customer wants to travel to New York and "
    "they need to exchange 830 EUR to USD. Can you please "
    "provide them with a summary of the weather and "
    "exchanged currently in USD?",
    summary_method="reflection_with_llm",
    summary_args={
        "summary_prompt": """Summarize the conversation by
providing an email response with the travel information
for the customer addressed as 'Dear Customer'. Do not
provide any additional conversation or apologise,
just provide the relevant information and the email."""
    },
)

After the conversation has finished, we'll print out the summary.

print(f"Here's the LLM summary of the conversation:\n\n{res.summary['content']}")

Here's the resulting output.

user_proxy (to functionbot):

My customer wants to travel to New York and they need to exchange 830 EUR
to USD. Can you please provide them with a summary of the weather and
exchanged currently in USD?

--------------------------------------------------------------------------------
functionbot (to user_proxy):

Certainly! I'd be happy to help your customer with information about the
weather in New York and the currency exchange from EUR to USD. Let's use
the available tools to get this information.

***** Suggested tool call (toolu_016wBUKVX2TToBaMqmiGvhav): weather_forecast *****
Arguments:
{"location": "New York"}
**********************************************************************************
***** Suggested tool call (toolu_01Nxjeew2BspfKdZ85on3XkP): currency_calculator *****
Arguments:
{"base_amount": 830, "base_currency": "EUR", "quote_currency": "USD"}
*************************************************************************************

--------------------------------------------------------------------------------

>>>>>>>> EXECUTING FUNCTION weather_forecast...

>>>>>>>> EXECUTING FUNCTION currency_calculator...
user_proxy (to functionbot):

user_proxy (to functionbot):

***** Response from calling tool (toolu_016wBUKVX2TToBaMqmiGvhav) *****
New York will be 11 degrees fahrenheit
***********************************************************************

--------------------------------------------------------------------------------
user_proxy (to functionbot):

***** Response from calling tool (toolu_01Nxjeew2BspfKdZ85on3XkP) *****
913.0000000000001 USD
***********************************************************************

--------------------------------------------------------------------------------
functionbot (to user_proxy):

Thank you for providing the information. I can now give your customer a
summary of the weather in New York and the currency exchange from EUR to USD.

Weather in New York:
The current forecast for New York indicates that it will be 11 degrees
Fahrenheit. This is quite cold, so your customer should be prepared with
warm clothing for their trip.

Currency Exchange:
For 830 EUR, your customer will receive approximately 913 USD. This is based
on the current exchange rate provided by our currency calculator.

To summarize:
1. Weather in New York: 11°F (very cold)
2. Currency exchange: 830 EUR = 913 USD

Your customer should pack warm clothes for the cold weather in New York and
can expect to have about 913 USD for their trip after exchanging 830 EUR.

TERMINATE

--------------------------------------------------------------------------------
Here's the LLM summary of the conversation:

Certainly. I'll provide an email response to the customer with the travel
information as requested.

Dear Customer,

We are pleased to provide you with the following information for your
upcoming trip to New York:

Weather Forecast:
The current forecast for New York indicates a temperature of 11 degrees
Fahrenheit. Please be prepared for very cold weather and pack appropriate
warm clothing.

Currency Exchange:
We have calculated the currency exchange for you. Your 830 EUR will be
equivalent to approximately 913 USD at the current exchange rate.

We hope this information helps you prepare for your trip to New York. Have
a safe and enjoyable journey!

Best regards,
Travel Assistance Team

So we can see how Anthropic's Sonnet is able to suggest multiple tools in a single response, with AutoGen executing them both and providing the results back to Sonnet. Sonnet then produces a nice email summary that can serve as the basis for continuing the real-life conversation with the customer.

More Tips and Tricks

For an interesting chess game between Anthropic's Sonnet and Mistral's Mixtral, we've put together a sample notebook that demonstrates some of the tips and tricks for working with non-OpenAI LLMs. Check out the notebook.