This example will cover chat completions using the Azure OpenAI service. It also includes information on content filtering.
Setup
First, we install the necessary dependencies and import the libraries we will be using.
! pip install "openai>=1.0.0,<2.0.0"
! pip install python-dotenv
import os
import openai
import dotenv
dotenv.load_dotenv()
Authentication
The Azure OpenAI service supports multiple authentication mechanisms that include API keys and Azure Active Directory token credentials.
use_azure_active_directory = False  # Set this flag to True if you are using Azure Active Directory
Authentication using API key
To set up the OpenAI SDK to use an Azure API Key, we need to set api_key to a key associated with your endpoint (you can find this key in "Keys and Endpoints" under "Resource Management" in the Azure Portal). You can also find the endpoint for your resource there.
if not use_azure_active_directory:
    endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
    api_key = os.environ["AZURE_OPENAI_API_KEY"]

    client = openai.AzureOpenAI(
        azure_endpoint=endpoint,
        api_key=api_key,
        api_version="2023-09-01-preview"
    )
Authentication using Azure Active Directory
Let's now see how we can authenticate via Azure Active Directory. We'll start by installing the azure-identity library. This library will provide the token credentials we need to authenticate, and it helps us build a token credential provider through the get_bearer_token_provider helper function. It's recommended to use get_bearer_token_provider over providing a static token to AzureOpenAI because this API will automatically cache and refresh tokens for you.
For more information on how to set up Azure Active Directory authentication with Azure OpenAI, see the documentation.
! pip install "azure-identity>=1.15.0"
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
if use_azure_active_directory:
    endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]

    client = openai.AzureOpenAI(
        azure_endpoint=endpoint,
        azure_ad_token_provider=get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"),
        api_version="2023-09-01-preview"
    )
Note: AzureOpenAI infers the following arguments from their corresponding environment variables if they are not provided:
- api_key from AZURE_OPENAI_API_KEY
- azure_ad_token from AZURE_OPENAI_AD_TOKEN
- api_version from OPENAI_API_VERSION
- azure_endpoint from AZURE_OPENAI_ENDPOINT
Deployments
In this section we are going to create a deployment of a GPT model that we can use to create chat completions.
Deployments: Create in the Azure OpenAI Studio
Let's deploy a model to use with chat completions. Go to https://portal.azure.com, find your Azure OpenAI resource, and then navigate to the Azure OpenAI Studio. Click on the "Deployments" tab and then create a deployment for the model you want to use for chat completions. The deployment name that you give the model will be used in the code below.
deployment = ""  # Fill in the deployment name from the portal here
Create chat completions
Now let's create a chat completion using the client we built.
# For all possible arguments see https://platform.openai.com/docs/api-reference/chat-completions/create
response = client.chat.completions.create(
    model=deployment,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Knock knock."},
        {"role": "assistant", "content": "Who's there?"},
        {"role": "user", "content": "Orange."},
    ],
    temperature=0,
)
print(f"{response.choices[0].message.role}: {response.choices[0].message.content}")
Create a streaming chat completion
We can also stream the response.
response = client.chat.completions.create(
    model=deployment,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Knock knock."},
        {"role": "assistant", "content": "Who's there?"},
        {"role": "user", "content": "Orange."},
    ],
    temperature=0,
    stream=True
)
for chunk in response:
    if len(chunk.choices) > 0:
        delta = chunk.choices[0].delta

        if delta.role:
            print(delta.role + ": ", end="", flush=True)
        if delta.content:
            print(delta.content, end="", flush=True)
Content filtering
The Azure OpenAI service includes content filtering of prompts and completion responses. You can learn more about content filtering and how to configure it here.
If the prompt is flagged by the content filter, the library will raise a BadRequestError exception with a content_filter error code. Otherwise, you can access the prompt_filter_results and content_filter_results on the response to see the results of the content filtering and what categories were flagged.
Prompt flagged by content filter
import json
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "<text violating the content policy>"}
]
try:
    completion = client.chat.completions.create(
        messages=messages,
        model=deployment,
    )
except openai.BadRequestError as e:
    err = json.loads(e.response.text)
    if err["error"]["code"] == "content_filter":
        print("Content filter triggered!")

        content_filter_result = err["error"]["innererror"]["content_filter_result"]
        for category, details in content_filter_result.items():
            print(f"{category}:\n filtered={details['filtered']}\n severity={details['severity']}")
Checking the result of the content filter
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the biggest city in Washington?"}
]

completion = client.chat.completions.create(
    messages=messages,
    model=deployment,
)
print(f"Answer: {completion.choices[0].message.content}")

# prompt content filter result in "model_extra" for azure
prompt_filter_result = completion.model_extra["prompt_filter_results"][0]["content_filter_results"]
print("\nPrompt content filter results:")
for category, details in prompt_filter_result.items():
    print(f"{category}:\n filtered={details['filtered']}\n severity={details['severity']}")

# completion content filter result
print("\nCompletion content filter results:")
completion_filter_result = completion.choices[0].model_extra["content_filter_results"]
for category, details in completion_filter_result.items():
    print(f"{category}:\n filtered={details['filtered']}\n severity={details['severity']}")
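The per-category loops above can be wrapped in a small helper that reports only the categories the filter actually flagged. A minimal sketch over a dict shaped like content_filter_results (flagged_categories and the sample data are illustrative, not part of the SDK):

```python
def flagged_categories(content_filter_results):
    """Return only the categories marked filtered, mapped to their severity."""
    return {
        category: details["severity"]
        for category, details in content_filter_results.items()
        if details.get("filtered")
    }

# Hypothetical data in the shape the service returns:
sample = {
    "hate": {"filtered": False, "severity": "safe"},
    "self_harm": {"filtered": False, "severity": "safe"},
    "violence": {"filtered": True, "severity": "medium"},
}
print(flagged_categories(sample))  # → {'violence': 'medium'}
```

The same helper works for both prompt_filter_results entries and the per-choice content_filter_results, since both use the category → {filtered, severity} shape shown above.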