Tracing with AutoGen#


AutoGen offers conversable agents powered by LLMs, tools, or humans, which can perform tasks collectively via automated chat. The framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature here.

Learning objectives - Upon completing this tutorial, you should be able to:

  • Trace LLM (OpenAI) calls and visualize the trace of your application.

Requirements#

AutoGen requires Python >= 3.8. To run this notebook example, please install the required dependencies:

%%capture --no-stderr
%pip install -r ./requirements.txt
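If the requirements file is not available in your working directory, you can install the dependencies directly. The package names below are an assumption based on the imports used in this tutorial (autogen, promptflow.tracing, and opentelemetry):

%%capture --no-stderr
%pip install pyautogen promptflow-tracing opentelemetry-api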

Set your API endpoint#

You can create a config file named OAI_CONFIG_LIST.json from the example file OAI_CONFIG_LIST.json.example.
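A minimal sketch of what that config file might contain (the entries below are placeholders, not real credentials; adjust the model names to whatever you have access to):

[
    {
        "model": "gpt-3.5-turbo",
        "api_key": "<your OpenAI API key>"
    }
]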

The following code uses the config_list_from_json function to load a list of configurations from an environment variable or a json file.

import autogen

# please ensure you have a json config file
env_or_file = "OAI_CONFIG_LIST.json"

# filter the configs by model (you can filter by other keys as well);
# only models matching the filter condition are kept in the list

# gpt4
# config_list = autogen.config_list_from_json(
#     env_or_file,
#     filter_dict={
#         "model": ["gpt-4", "gpt-4-0314", "gpt4", "gpt-4-32k", "gpt-4-32k-0314", "gpt-4-32k-v0314"],
#     },
# )

# gpt35
config_list = autogen.config_list_from_json(
    env_or_file,
    filter_dict={
        "model": {
            "gpt-35-turbo",
            "gpt-3.5-turbo",
            "gpt-3.5-turbo-16k",
            "gpt-3.5-turbo-0301",
            "chatgpt-35-turbo-0301",
            "gpt-35-turbo-v0301",
        },
    },
)
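As noted in the comment above, filter_dict can match on keys other than model. For example, here is a hedged sketch that keeps only Azure OpenAI entries, assuming your json entries include an api_type field:

# keep only config entries whose api_type is "azure"
# (assumes the entries in OAI_CONFIG_LIST.json include an "api_type" key)
azure_config_list = autogen.config_list_from_json(
    env_or_file,
    filter_dict={"api_type": ["azure"]},
)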

Construct agents#

import os

os.environ["AUTOGEN_USE_DOCKER"] = "False"

llm_config = {"config_list": config_list, "cache_seed": 42}
user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A human admin.",
    code_execution_config={
        "last_n_messages": 2,
        "work_dir": "groupchat",
        "use_docker": False,
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    human_input_mode="TERMINATE",
)
coder = autogen.AssistantAgent(
    name="Coder",
    llm_config=llm_config,
)
pm = autogen.AssistantAgent(
    name="Product_manager",
    system_message="Creative in software product ideas.",
    llm_config=llm_config,
)
groupchat = autogen.GroupChat(agents=[user_proxy, coder, pm], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

Start chat with promptflow trace#

from promptflow.tracing import start_trace

# start a trace session and print a url for viewing the trace
# traces will be collected into the collection name below
start_trace(collection="autogen-groupchat")

Open the URL printed by start_trace; when you run the code below, you will be able to see the new trace in the UI.

from opentelemetry import trace
import json


tracer = trace.get_tracer("my_tracer")
# Create a root span
with tracer.start_as_current_span("autogen") as span:
    message = "Find a latest paper about gpt-4 on arxiv and find its potential applications in software."
    user_proxy.initiate_chat(
        manager,
        message=message,
        clear_history=True,
    )
    span.set_attribute("custom", "custom attribute value")
    # it is recommended to store inputs and outputs as events
    span.add_event(
        "promptflow.function.inputs", {"payload": json.dumps(dict(message=message))}
    )
    span.add_event(
        "promptflow.function.output", {"payload": json.dumps(user_proxy.last_message())}
    )
# type exit to terminate the chat

Next steps#

By now, you have successfully traced LLM calls in your app using prompt flow.

You can check out more examples:

  • Trace your flow: use promptflow @trace to structurally trace your app and evaluate it with a batch run.
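As a preview of that example, here is a minimal sketch of the @trace decorator; the greet function is hypothetical, while start_trace and trace come from promptflow.tracing:

from promptflow.tracing import start_trace, trace


@trace
def greet(name: str) -> str:
    # each call to a @trace-decorated function is recorded as a span in the trace UI
    return f"Hello, {name}!"


start_trace(collection="trace-your-flow")
greet("world")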