# Portkey Integration

Langroid provides seamless integration with Portkey, a powerful AI gateway that lets you access multiple LLM providers through a unified API, with advanced features such as caching, retries, fallbacks, and comprehensive observability.
## What is Portkey?

Portkey is an AI gateway that sits between your application and various LLM providers, offering:

- **Unified API**: Access 200+ models from different providers through one interface
- **Reliability**: Automatic retries, fallbacks, and load balancing
- **Observability**: Detailed logging, tracing, and analytics
- **Performance**: Intelligent caching and request optimization
- **Security**: Virtual keys and fine-grained access control
- **Cost Management**: Usage tracking and budget controls

For complete documentation, visit the [Portkey Documentation](https://docs.portkey.ai).
## Quick Start

### 1. Setup

First, sign up for a Portkey account at [portkey.ai](https://portkey.ai) and obtain your API key.

Set your environment variables, either directly or in your `.env` file as usual:
```bash
# Required: Portkey API key
export PORTKEY_API_KEY="your-portkey-api-key"

# Required: Provider API keys (for the models you want to use)
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
# ... other provider keys as needed
```
### 2. Basic Usage
```python
import langroid as lr
import langroid.language_models as lm
from langroid.language_models.provider_params import PortkeyParams

# Create an LLM config to use Portkey's OpenAI-compatible API.
# (Note that the name `OpenAIGPTConfig` does NOT imply it only works with OpenAI models;
# the name reflects the fact that the config is meant to be used with an
# OpenAI-compatible API, which Portkey provides for multiple LLM providers.)
llm_config = lm.OpenAIGPTConfig(
    chat_model="portkey/openai/gpt-4o-mini",
    portkey_params=PortkeyParams(
        api_key="your-portkey-api-key",  # Or set PORTKEY_API_KEY env var
    ),
)

# Create LLM instance
llm = lm.OpenAIGPT(llm_config)

# Use normally
response = llm.chat("What is the smallest prime number?")
print(response.message)
```
### 3. Multiple Providers

Switch seamlessly between providers:
```python
# OpenAI
config_openai = lm.OpenAIGPTConfig(
    chat_model="portkey/openai/gpt-4o",
)

# Anthropic
config_anthropic = lm.OpenAIGPTConfig(
    chat_model="portkey/anthropic/claude-3-5-sonnet-20241022",
)

# Google Gemini
config_gemini = lm.OpenAIGPTConfig(
    chat_model="portkey/google/gemini-2.0-flash-lite",
)
```
## Advanced Features

### Virtual Keys

Use virtual keys to abstract away provider management:
```python
config = lm.OpenAIGPTConfig(
    chat_model="portkey/openai/gpt-4o",
    portkey_params=PortkeyParams(
        virtual_key="vk-your-virtual-key",  # Configured in Portkey dashboard
    ),
)
```
### Caching and Performance

Enable intelligent caching to reduce costs and improve performance:
```python
config = lm.OpenAIGPTConfig(
    chat_model="portkey/openai/gpt-4o-mini",
    portkey_params=PortkeyParams(
        cache={
            "enabled": True,
            "ttl": 3600,  # 1-hour cache
            "namespace": "my-app",
        },
        cache_force_refresh=False,
    ),
)
```
### Retry Policies

Configure automatic retries for better reliability:
```python
config = lm.OpenAIGPTConfig(
    chat_model="portkey/anthropic/claude-3-haiku-20240307",
    portkey_params=PortkeyParams(
        retry={
            "max_retries": 3,
            "backoff": "exponential",
            "jitter": True,
        },
    ),
)
```
### Observability and Tracing

Add comprehensive tracing for production monitoring:
```python
import uuid

config = lm.OpenAIGPTConfig(
    chat_model="portkey/openai/gpt-4o",
    portkey_params=PortkeyParams(
        trace_id=f"trace-{uuid.uuid4().hex[:8]}",
        metadata={
            "user_id": "user-123",
            "session_id": "session-456",
            "app_version": "1.2.3",
        },
        user="user-123",
        organization="my-org",
        custom_headers={
            "x-request-source": "langroid",
            "x-feature": "chat-completion",
        },
    ),
)
```
## Configuration Reference

The `PortkeyParams` class supports all Portkey features:
```python
from langroid.language_models.provider_params import PortkeyParams

params = PortkeyParams(
    # Authentication
    api_key="pk-...",           # Portkey API key
    virtual_key="vk-...",       # Virtual key (optional)

    # Observability
    trace_id="trace-123",       # Request tracing
    metadata={"key": "value"},  # Custom metadata
    user="user-id",             # User identifier
    organization="org-id",      # Organization identifier

    # Performance
    cache={                     # Caching configuration
        "enabled": True,
        "ttl": 3600,
        "namespace": "my-app",
    },
    cache_force_refresh=False,  # Force cache refresh

    # Reliability
    retry={                     # Retry configuration
        "max_retries": 3,
        "backoff": "exponential",
        "jitter": True,
    },

    # Custom headers
    custom_headers={            # Additional headers
        "x-custom": "value",
    },

    # Base URL (usually not needed)
    base_url="https://api.portkey.ai",  # Portkey API endpoint
)
```
## Supported Providers

Portkey supports 200+ models from different providers. Common ones include:
```python
# OpenAI
"portkey/openai/gpt-4o"
"portkey/openai/gpt-4o-mini"

# Anthropic
"portkey/anthropic/claude-3-5-sonnet-20241022"
"portkey/anthropic/claude-3-haiku-20240307"

# Google
"portkey/google/gemini-2.0-flash-lite"
"portkey/google/gemini-1.5-pro"

# Cohere
"portkey/cohere/command-r-plus"

# Meta
"portkey/meta/llama-3.1-405b-instruct"

# And many more...
```
See the [Portkey documentation](https://docs.portkey.ai) for the complete list.
## Examples

Langroid includes comprehensive Portkey examples in the `examples/portkey/` directory:

- `portkey_basic_chat.py` - Basic usage with multiple providers
- `portkey_advanced_features.py` - Caching, retries, and observability
- `portkey_multi_provider.py` - Comparing responses across providers

Run any example:
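For instance, to run the basic-chat example (assuming you are in the repo root with the API keys from the Setup section exported; the exact invocation may vary with your environment):

```shell
python examples/portkey/portkey_basic_chat.py
```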
## Best Practices

### 1. Use Environment Variables

Never hardcode API keys:
```bash
# .env file
PORTKEY_API_KEY=your_portkey_key
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
```
### 2. Implement Fallback Strategies

Use multiple providers for better reliability:
```python
def chat_with_fallback(question: str):
    providers = [
        ("openai", "gpt-4o-mini"),
        ("anthropic", "claude-3-haiku-20240307"),
        ("google", "gemini-2.0-flash-lite"),
    ]
    for provider, model in providers:
        try:
            config = lm.OpenAIGPTConfig(
                chat_model=f"portkey/{provider}/{model}",
            )
            llm = lm.OpenAIGPT(config)
            return llm.chat(question)
        except Exception:
            continue  # Try next provider
    raise RuntimeError("All providers failed")
```
### 3. Add Meaningful Metadata

Include context for better observability:
```python
params = PortkeyParams(
    metadata={
        "user_id": user.id,
        "feature": "document_qa",
        "document_type": "pdf",
        "processing_stage": "summary",
    },
)
```
### 4. Use Caching Wisely

Enable caching for deterministic queries:
```python
# Good for caching
params = PortkeyParams(
    cache={"enabled": True, "ttl": 3600},
)

# Use with deterministic prompts
response = llm.chat("What is the capital of France?")
```
### 5. Monitor Performance

Use trace IDs to track request flows:
```python
import uuid

trace_id = f"trace-{uuid.uuid4().hex[:8]}"
params = PortkeyParams(
    trace_id=trace_id,
    metadata={"operation": "document_processing"},
)
# Use the same trace_id for related requests
```
## Monitoring and Analytics

### Portkey Dashboard

Visit [app.portkey.ai](https://app.portkey.ai) for detailed analytics:

- Request/response logs
- Token usage and costs
- Performance metrics (latency, errors)
- Provider comparisons
- Custom filters by metadata
### Custom Filtering

Use metadata and headers to filter requests:
```python
# Tag requests by feature
params = PortkeyParams(
    metadata={"feature": "chat", "version": "v2"},
    custom_headers={"x-request-type": "production"},
)
```
Then filter in the dashboard by:

- `metadata.feature = "chat"`
- `headers.x-request-type = "production"`
## Troubleshooting

### Common Issues

- **Authentication errors**
    - Check that `PORTKEY_API_KEY` is set correctly
    - Verify the API key is valid in the Portkey dashboard
- **Missing provider API keys**
    - Set the provider API key (e.g., `OPENAI_API_KEY`)
    - Or use virtual keys configured in the Portkey dashboard
- **Model not found**
    - Check the model name format: `portkey/provider/model`
    - Verify the model is available through Portkey
- **Rate limits**
    - Configure retry parameters
    - Use virtual keys for better rate-limit management
### Debug Mode

Enable detailed logging:
### Testing Your Configuration

Verify your setup:
```python
# Test basic connection
config = lm.OpenAIGPTConfig(
    chat_model="portkey/openai/gpt-4o-mini",
    max_output_tokens=50,
)
llm = lm.OpenAIGPT(config)
response = llm.chat("Hello")
print("✅ Portkey integration working!")
```
## Migration Guide

### From Direct Provider Access

If you currently use a provider directly:
```python
# Before: Direct OpenAI
config = lm.OpenAIGPTConfig(
    chat_model="gpt-4o-mini",
)

# After: Through Portkey
config = lm.OpenAIGPTConfig(
    chat_model="portkey/openai/gpt-4o-mini",
)
```
### Adding Advanced Features Gradually

Start simple and add features as needed:
```python
# Step 1: Basic Portkey
config = lm.OpenAIGPTConfig(
    chat_model="portkey/openai/gpt-4o-mini",
)

# Step 2: Add caching
config = lm.OpenAIGPTConfig(
    chat_model="portkey/openai/gpt-4o-mini",
    portkey_params=PortkeyParams(
        cache={"enabled": True, "ttl": 3600},
    ),
)

# Step 3: Add observability
config = lm.OpenAIGPTConfig(
    chat_model="portkey/openai/gpt-4o-mini",
    portkey_params=PortkeyParams(
        cache={"enabled": True, "ttl": 3600},
        metadata={"app": "my-app", "user": "user-123"},
        trace_id="trace-abc123",
    ),
)
```
## Resources

- Portkey website: https://portkey.ai
- Portkey documentation: https://docs.portkey.ai
- Portkey dashboard: https://app.portkey.ai
- Supported models: https://docs.portkey.ai/docs/integrations/models
- Langroid examples: `examples/portkey/` directory
- API reference: https://docs.portkey.ai/docs/api-reference