# Bedrock

## Quick Start

### 1. Define Guardrails on your LiteLLM config.yaml

Define your guardrails under the `guardrails` section:
```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY

guardrails:
  - guardrail_name: "bedrock-pre-guard"
    litellm_params:
      guardrail: bedrock  # supported values: "aporia", "bedrock", "lakera"
      mode: "during_call"
      guardrailIdentifier: ff6ujrregl1q  # your guardrail ID on bedrock
      guardrailVersion: "DRAFT"          # your guardrail version on bedrock
```
#### Supported values for `mode`

- `pre_call`: runs before the LLM call, on the input
- `post_call`: runs after the LLM call, on the input and output
- `during_call`: runs during the LLM call, on the input. Same as `pre_call`, but runs in parallel with the LLM call; the response is not returned until the guardrail check completes
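For example, to block the request on the input check before the LLM is invoked at all, the same guardrail entry can be switched to `pre_call` (config fragment reusing the values above):

```yaml
guardrails:
  - guardrail_name: "bedrock-pre-guard"
    litellm_params:
      guardrail: bedrock
      mode: "pre_call"           # check the input before the LLM call starts
      guardrailIdentifier: ff6ujrregl1q
      guardrailVersion: "DRAFT"
```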
### 2. Start LiteLLM Gateway

```shell
litellm --config config.yaml --detailed_debug
```
### 3. Test request

#### Unsuccessful call

Expect this call to fail, since `ishaan@berri.ai` in the request is PII:
```shell
curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi my email is ishaan@berri.ai"}
    ],
    "guardrails": ["bedrock-pre-guard"]
  }'
```
Expected response on failure:
```json
{
  "error": {
    "message": {
      "error": "Violated guardrail policy",
      "bedrock_guardrail_response": {
        "action": "GUARDRAIL_INTERVENED",
        "assessments": [
          {
            "topicPolicy": {
              "topics": [
                {
                  "action": "BLOCKED",
                  "name": "Coffee",
                  "type": "DENY"
                }
              ]
            }
          }
        ],
        "blockedResponse": "Sorry, the model cannot answer this question. coffee guardrail applied",
        "output": [
          {
            "text": "Sorry, the model cannot answer this question. coffee guardrail applied"
          }
        ],
        "outputs": [
          {
            "text": "Sorry, the model cannot answer this question. coffee guardrail applied"
          }
        ],
        "usage": {
          "contentPolicyUnits": 0,
          "contextualGroundingPolicyUnits": 0,
          "sensitiveInformationPolicyFreeUnits": 0,
          "sensitiveInformationPolicyUnits": 0,
          "topicPolicyUnits": 1,
          "wordPolicyUnits": 0
        }
      }
    },
    "type": "None",
    "param": "None",
    "code": "400"
  }
}
```
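A client can tell a guardrail block apart from other 400 errors by checking the `bedrock_guardrail_response.action` field in the error body. A minimal sketch, assuming the error payload has the shape shown above (the `guardrail_blocked` helper is hypothetical, not part of LiteLLM):

```python
import json


def guardrail_blocked(error_body: dict) -> bool:
    """Return True if the error payload indicates a Bedrock guardrail intervention."""
    message = error_body.get("error", {}).get("message")
    if not isinstance(message, dict):
        return False  # plain-string error messages are not guardrail blocks
    bedrock = message.get("bedrock_guardrail_response", {})
    return bedrock.get("action") == "GUARDRAIL_INTERVENED"


# Trimmed-down version of the failure response above
response_body = json.loads('''{
  "error": {
    "message": {
      "error": "Violated guardrail policy",
      "bedrock_guardrail_response": {"action": "GUARDRAIL_INTERVENED"}
    },
    "type": "None", "param": "None", "code": "400"
  }
}''')

print(guardrail_blocked(response_body))  # True
```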
#### Successful call

```shell
curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi what is the weather"}
    ],
    "guardrails": ["bedrock-pre-guard"]
  }'
```
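The same request can be built from Python; a minimal sketch that constructs the request body used in the curl examples, where the `guardrails` field selects which configured guardrails apply to this request:

```python
import json

# Chat-completions payload with a per-request guardrail selection.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "hi what is the weather"}
    ],
    "guardrails": ["bedrock-pre-guard"],
}

body = json.dumps(payload)
# POST this body to http://localhost:4000/v1/chat/completions with a
# Content-Type: application/json header and an Authorization: Bearer <key>
# header, using any HTTP client.
print(body)
```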