Prompty output format#


Learning objectives - after completing this tutorial, you should be able to:

  • Understand how to handle prompty output formats such as text and json_object.

  • Understand how to consume the streaming output of a prompty.

0. Install dependent packages#

%%capture --no-stderr
%pip install promptflow-devkit

1. Create necessary connections#

Connections help securely store and manage the keys or other sensitive credentials required to interact with LLMs and other external tools (for example, Azure Content Safety).

The prompty used above relies on the connection open_ai_connection internally; we need to set up this connection if it has not been added before. Once created, it is stored in a local database and can be used in any flow.

Follow this instruction to prepare your Azure OpenAI resource and get your api_key if you don't already have one.

from promptflow.client import PFClient
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection

# client can help manage your runs and connections.
pf = PFClient()
try:
    conn_name = "open_ai_connection"
    conn = pf.connections.get(name=conn_name)
    print("using existing connection")
except Exception:
    # Follow https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal to create an Azure OpenAI resource.
    connection = AzureOpenAIConnection(
        name=conn_name,
        api_key="<your_AOAI_key>",
        api_base="<your_AOAI_endpoint>",
        api_type="azure",
    )

    # use this if you have an existing OpenAI account
    # connection = OpenAIConnection(
    #     name=conn_name,
    #     api_key="<user-input>",
    # )

    conn = pf.connections.create_or_update(connection)
    print("successfully created connection")

print(conn)

2. Format prompty output#

Text output#

By default, prompty returns the message of the first choice.

with open("text_format.prompty") as fin:
    print(fin.read())
from promptflow.core import Prompty

# load prompty as a flow
f = Prompty.load("text_format.prompty")
# execute the flow as function
question = "What is the capital of France?"
result = f(first_name="John", last_name="Doe", question=question)

# note: the result is a string
result

JSON object output#

Prompty returns the content of the first choice as a dictionary when the user:

  • defines response_format as type: json_object in the parameters

  • specifies the JSON format to return in the template.

Note: response_format is compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. For more details, refer to this document.
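The two requirements above live in the prompty file itself. As a rough sketch only (the actual json_format.prompty in the sample folder may differ in its fields and wording), the front matter and template might look like this:

```yaml
---
name: JSON Format Prompt
model:
  api: chat
  configuration:
    type: azure_openai
    connection: open_ai_connection
  parameters:
    max_tokens: 128
    response_format:
      type: json_object   # requirement 1: ask the model for a JSON object
inputs:
  first_name:
    type: string
  question:
    type: string
---
system:
You are an AI assistant talking to {{first_name}}.
Answer as a JSON object with the keys "name" and "answer".  # requirement 2

user:
{{question}}
```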

with open("json_format.prompty") as fin:
    print(fin.read())
from promptflow.core import Prompty

# load prompty as a flow
f = Prompty.load("json_format.prompty")
# execute the flow as function
question = "What is the capital of France?"
result = f(first_name="John", last_name="Doe", question=question)

# note: the result is a dict
result

All choices#

When the user configures the response as all, prompty will return the raw LLM response, which contains all the choices.

with open("all_response.prompty") as fin:
    print(fin.read())
from promptflow.core import Prompty

# load prompty as a flow
f = Prompty.load("all_response.prompty")
# execute the flow as function
question = "What is the capital of France?"
result = f(first_name="John", last_name="Doe", question=question)

# note: the result is a ChatCompletion object
print(result.choices[0])
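Since the raw response keeps every choice, the text of all of them can be collected in one pass over result.choices. A minimal sketch, using SimpleNamespace stand-ins for the real ChatCompletion object (a real response from an "all" prompty has the same choices[i].message.content shape):

```python
from types import SimpleNamespace

# Stand-in for a ChatCompletion response with several choices;
# in the tutorial, `result` returned by the "all" prompty has this shape.
result = SimpleNamespace(
    choices=[
        SimpleNamespace(message=SimpleNamespace(content="Paris")),
        SimpleNamespace(message=SimpleNamespace(content="The capital of France is Paris.")),
    ]
)

# Each choice carries its own message; collect the content of all of them.
answers = [choice.message.content for choice in result.choices]
print(answers)  # → ['Paris', 'The capital of France is Paris.']
```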

Streaming output#

When stream=true is configured in the parameters of a prompty whose output format is text, the promptflow SDK returns a generator type whose items are the content of each chunk.

with open("stream_output.prompty") as fin:
    print(fin.read())
from promptflow.core import Prompty

# load prompty as a flow
f = Prompty.load("stream_output.prompty")
# execute the flow as function
question = "What's the steps to get rich?"
result = f(question=question)
for item in result:
    print(item, end="")

Note: when stream=True, if the response format is json_object or the response is all, the LLM response will be returned directly. For more details on how to handle a streaming response, refer to this document.
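When the raw streaming response is returned directly, each item is a chunk carrying a delta rather than a full message, so the text has to be accumulated by the caller. A minimal sketch of that accumulation, using SimpleNamespace stand-ins for the real chunk objects (a real streamed response yields ChatCompletionChunk objects with the same choices[0].delta.content shape):

```python
from types import SimpleNamespace

def make_chunk(text):
    # Stand-in for one ChatCompletionChunk: content arrives under choices[0].delta
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

# Stand-in for the stream returned directly by the LLM
stream = iter([make_chunk("The "), make_chunk("answer"), make_chunk(None)])

pieces = []
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta is not None:  # the final chunk typically carries no content
        pieces.append(delta)

full_text = "".join(pieces)
print(full_text)  # → The answer
```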

Batch run with text output#

from promptflow.client import PFClient

data = "./data.jsonl"  # path to the data file

# create run with the flow and data
pf = PFClient()
base_run = pf.run(
    flow="text_format.prompty",
    data=data,
    column_mapping={
        "question": "${data.question}",
    },
    stream=True,
)
details = pf.get_details(base_run)
details.head(10)
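The batch run reads one JSON object per line from data.jsonl, and the column_mapping pulls each record's question field into the flow input. A hypothetical example of what such a file might contain (the actual data.jsonl in the sample folder may differ):

```jsonl
{"question": "What is the capital of France?"}
{"question": "What is the capital of Japan?"}
```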

Batch run with stream output#

from promptflow.client import PFClient

data = "./data.jsonl"  # path to the data file

# create run with the flow and data
pf = PFClient()
base_run = pf.run(
    flow="stream_output.prompty",
    data=data,
    column_mapping={
        "question": "${data.question}",
    },
    stream=True,
)
details = pf.get_details(base_run)
details.head(10)