Chat with prompty#


Learning objectives - Upon completing this tutorial, you should be able to:

  • Write an LLM application using prompty and visualize the trace of your application.

  • Understand how to handle chat conversations using prompty.

  • Batch-run a prompty against multiple lines of data.

0. Install dependent packages#

%%capture --no-stderr
%pip install promptflow-devkit

1. Prompty#

Prompty is a file with a .prompty extension used for developing prompt templates. A prompty asset is a markdown file with a modified front matter. The front matter is in YAML format and contains a number of metadata fields that define the model configuration and the expected inputs of the prompty.

with open("chat.prompty") as fin:
    print(fin.read())
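The cell above prints the actual file. For orientation, a chat prompty typically looks roughly like the following sketch; the field names and values here are illustrative, not copied from chat.prompty:

---
name: Chat Prompt
model:
  api: chat
  configuration:
    type: azure_openai
    connection: open_ai_connection
    azure_deployment: gpt-4o
  parameters:
    max_tokens: 256
inputs:
  question:
    type: string
---
system:
You are a helpful assistant.

user:
{{question}}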

Create necessary connections#

Connections help securely store and manage secret keys or other sensitive credentials required for interacting with LLMs and other external tools, e.g. Azure Content Safety.

The prompty above uses the connection open_ai_connection, so we need to set it up if we haven't added it before. Once created, it is stored in the local database and can be used in any flow.

Follow this instruction to prepare your Azure OpenAI resource and get your api_key if you don't already have one.

from promptflow.client import PFClient
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection

# client can help manage your runs and connections.
pf = PFClient()
try:
    conn_name = "open_ai_connection"
    conn = pf.connections.get(name=conn_name)
    print("using existing connection")
except:
    # Follow https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal to create an Azure OpenAI resource.
    connection = AzureOpenAIConnection(
        name=conn_name,
        api_key="<your_AOAI_key>",
        api_base="<your_AOAI_endpoint>",
        api_type="azure",
    )

    # use this if you have an existing OpenAI account
    # connection = OpenAIConnection(
    #     name=conn_name,
    #     api_key="<user-input>",
    # )

    conn = pf.connections.create_or_update(connection)
    print("successfully created connection")

print(conn)
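As an optional sanity check, you can list the connections stored in the local database to confirm that open_ai_connection is now available:

# list the names of all locally stored connections
for c in pf.connections.list():
    print(c.name)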

Execute prompty as a function#

from promptflow.core import Prompty

# load prompty as a flow
f = Prompty.load("chat.prompty")
# execute the flow as function
question = "What is the capital of France?"
result = f(question=question)
result

You can override the connection with AzureOpenAIModelConfiguration or OpenAIModelConfiguration.

from promptflow.core import AzureOpenAIModelConfiguration, OpenAIModelConfiguration


# override configuration with created connection in AzureOpenAIModelConfiguration
configuration = AzureOpenAIModelConfiguration(
    connection="open_ai_connection", azure_deployment="gpt-4o"
)

# override openai connection with OpenAIModelConfiguration
# configuration = OpenAIModelConfiguration(
#     connection=connection,
#     model="gpt-3.5-turbo"
# )

override_model = {
    "configuration": configuration,
}

# load prompty as a flow
f = Prompty.load("chat.prompty", model=override_model)
# execute the flow as function
question = "What is the capital of France?"
result = f(question=question)
result
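Besides the connection, the model override dict can also carry a parameters section that overrides the sampling parameters declared in the prompty front matter. A minimal sketch, with illustrative parameter values:

# override both the connection and the model parameters (values are illustrative)
override_model = {
    "configuration": configuration,
    "parameters": {"max_tokens": 256, "temperature": 0.2},
}

# load prompty as a flow with the combined override
f = Prompty.load("chat.prompty", model=override_model)
result = f(question="What is the capital of France?")
result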

Visualize trace by using start_trace#

from promptflow.tracing import start_trace

# start a trace session, and print a url for user to check trace
start_trace()

Re-running the cell below will collect a trace in the trace UI.

# rerun the function, which will be recorded in the trace
result = f(question=question)
result

Evaluate the result#

In this example, we will use a prompty that determines whether a chat conversation contains an apology from the assistant.

eval_prompty = "../eval-apology/apology.prompty"

with open(eval_prompty) as fin:
    print(fin.read())

Note: the eval flow returns a json_object.

# load prompty as a flow
eval_flow = Prompty.load(eval_prompty)
# execute the flow as function
result = eval_flow(question=question, answer=result, messages=[])
result
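Because the response is requested as a json_object, the returned value should already be parsed into a Python dict; a quick way to inspect whatever fields it contains (the exact keys are defined by apology.prompty, printed above):

# the eval result should be a parsed JSON object (a Python dict); inspect its fields
print(type(result))
if isinstance(result, dict):
    for key, value in result.items():
        print(f"{key}: {value}")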

2. Batch run with multi-line data#

from promptflow.client import PFClient

flow = "chat.prompty"  # path to the prompty file
data = "./data.jsonl"  # path to the data file

# create run with the flow and data
pf = PFClient()
base_run = pf.run(
    flow=flow,
    data=data,
    column_mapping={
        "question": "${data.question}",
        "chat_history": "${data.chat_history}",
    },
    stream=True,
)
details = pf.get_details(base_run)
details.head(10)
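Each line of data.jsonl is expected to provide the fields referenced in column_mapping above, i.e. question and chat_history. You can peek at the file the same way we printed the prompty:

# inspect the batch input; every JSON line should contain "question" and "chat_history"
with open(data) as fin:
    print(fin.read())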

3. Evaluate your prompt#

Then you can use an evaluation prompty to evaluate your prompt.

Run evaluation on the previous batch run#

The base_run is the batch run we completed in step 2 above, for the chat prompty with "data.jsonl" as input.

eval_run = pf.run(
    flow=eval_prompty,
    data="./data.jsonl",  # path to the data file
    run=base_run,  # specify base_run as the run you want to evaluate
    column_mapping={
        "messages": "${data.chat_history}",
        "question": "${data.question}",
        "answer": "${run.outputs.output}",  # TODO refine this mapping
    },
    stream=True,
)
details = pf.get_details(eval_run)
details.head(10)
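As an optional follow-up, you can visualize the base run and the eval run together; pf.visualize generates an HTML report that lets you compare outputs and evaluation results side by side.

# generate an HTML visualization covering both runs
pf.visualize([base_run, eval_run])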

Next steps#

By now you have successfully run your first prompt flow and even evaluated it. That's great!

You can check out more Prompty examples.