LiteLLM - Getting Started
https://github.com/BerriAI/litellm
Call 100+ LLMs using the OpenAI Input/Output Format
- Translate inputs to the provider's completion, embedding, and image_generation endpoints
- Consistent output: the text response is always available at ['choices'][0]['message']['content']
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router
- Track spend & set budgets per project - LiteLLM Proxy Server
How to use LiteLLM
You can use litellm in either of the following ways:
- LiteLLM Proxy Server - a server (LLM Gateway) to call 100+ LLMs, with load balancing and cost tracking across projects
- LiteLLM Python SDK - a Python client to call 100+ LLMs, with load balancing and cost tracking
When to use the LiteLLM Proxy Server (LLM Gateway)
tip
Use the LiteLLM Proxy Server if you want a central service (LLM Gateway) for accessing multiple LLMs.
Typically used by Gen AI Enablement / ML Platform teams.
- The LiteLLM Proxy gives you a unified interface to access multiple LLMs (100+ LLMs)
- Track LLM usage and set up guardrails
- Customize logging, guardrails, and caching per project
When to use the LiteLLM Python SDK
tip
Use the LiteLLM Python SDK if you want to use LiteLLM in your Python code.
Typically used by developers building LLM projects.
- The LiteLLM SDK gives you a unified interface to access multiple LLMs (100+ LLMs)
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router (see the sketch below)
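The Router referenced above is part of the SDK. A minimal sketch of configuring it across two deployments; the deployment names, keys, and api_base values here are placeholders:
from litellm import Router

router = Router(model_list=[
    {   # deployment 1 - Azure (placeholder credentials)
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "azure/<your_deployment_name>",
            "api_key": "azure-api-key",
            "api_base": "https://your-azure-endpoint.openai.azure.com/",
            "api_version": "2023-07-01-preview",
        },
    },
    {   # deployment 2 - OpenAI; same model_name, so it can serve as a fallback
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "openai-api-key"},
    },
])

# the Router picks a deployment and applies retry/fallback logic on failures
response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)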
LiteLLM Python SDK
Basic usage
pip install litellm
- OpenAI
- Anthropic
- VertexAI
- HuggingFace
- Azure OpenAI
- Ollama
- Openrouter
from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-api-key"
response = completion(
model="gpt-3.5-turbo",
messages=[{ "content": "Hello, how are you?","role": "user"}]
)
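Regardless of provider, the response comes back in the OpenAI format, so the text can always be read the same way (a minimal sketch continuing from the call above):
# consistent output location across providers
print(response["choices"][0]["message"]["content"])
# attribute-style access works too
print(response.choices[0].message.content)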
from litellm import completion
import os
## set ENV variables
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"
response = completion(
model="claude-2",
messages=[{ "content": "Hello, how are you?","role": "user"}]
)
from litellm import completion
import os
# auth: run 'gcloud auth application-default login'
os.environ["VERTEX_PROJECT"] = "hardy-device-386718"
os.environ["VERTEX_LOCATION"] = "us-central1"
response = completion(
model="chat-bison",
messages=[{ "content": "Hello, how are you?","role": "user"}]
)
from litellm import completion
import os
os.environ["HUGGINGFACE_API_KEY"] = "huggingface_api_key"
# e.g. call 'WizardLM/WizardCoder-Python-34B-V1.0' hosted on HF Inference Endpoints
response = completion(
model="huggingface/WizardLM/WizardCoder-Python-34B-V1.0",
messages=[{ "content": "Hello, how are you?","role": "user"}],
api_base="https://my-endpoint.huggingface.cloud"
)
print(response)
from litellm import completion
import os
## set ENV variables
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""
# azure call
response = completion(
"azure/<your_deployment_name>",
messages = [{ "content": "Hello, how are you?","role": "user"}]
)
from litellm import completion
response = completion(
model="ollama/llama2",
messages = [{ "content": "Hello, how are you?","role": "user"}],
api_base="http://localhost:11434"
)
from litellm import completion
import os
## set ENV variables
os.environ["OPENROUTER_API_KEY"] = "openrouter_api_key"
response = completion(
model="openrouter/google/palm-2-chat-bison",
messages = [{ "content": "Hello, how are you?","role": "user"}],
)
Streaming
Set stream=True in the completion args.
- OpenAI
- Anthropic
- VertexAI
- HuggingFace
- Azure OpenAI
- Ollama
- Openrouter
from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-api-key"
response = completion(
model="gpt-3.5-turbo",
messages=[{ "content": "Hello, how are you?","role": "user"}],
stream=True,
)
from litellm import completion
import os
## set ENV variables
os.environ["ANTHROPIC_API_KEY"] = "your-api-key"
response = completion(
model="claude-2",
messages=[{ "content": "Hello, how are you?","role": "user"}],
stream=True,
)
from litellm import completion
import os
# auth: run 'gcloud auth application-default login'
os.environ["VERTEX_PROJECT"] = "hardy-device-386718"
os.environ["VERTEX_LOCATION"] = "us-central1"
response = completion(
model="chat-bison",
messages=[{ "content": "Hello, how are you?","role": "user"}],
stream=True,
)
from litellm import completion
import os
os.environ["HUGGINGFACE_API_KEY"] = "huggingface_api_key"
# e.g. Call 'WizardLM/WizardCoder-Python-34B-V1.0' hosted on HF Inference endpoints
response = completion(
model="huggingface/WizardLM/WizardCoder-Python-34B-V1.0",
messages=[{ "content": "Hello, how are you?","role": "user"}],
api_base="https://my-endpoint.huggingface.cloud",
stream=True,
)
print(response)
from litellm import completion
import os
## set ENV variables
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""
# azure call
response = completion(
"azure/<your_deployment_name>",
messages = [{ "content": "Hello, how are you?","role": "user"}],
stream=True,
)
from litellm import completion
response = completion(
model="ollama/llama2",
messages = [{ "content": "Hello, how are you?","role": "user"}],
api_base="http://localhost:11434",
stream=True,
)
from litellm import completion
import os
## set ENV variables
os.environ["OPENROUTER_API_KEY"] = "openrouter_api_key"
response = completion(
model="openrouter/google/palm-2-chat-bison",
messages = [{ "content": "Hello, how are you?","role": "user"}],
stream=True,
)
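With stream=True, completion returns an iterator of chunks in the OpenAI streaming format; a minimal sketch of consuming it, continuing from any of the calls above:
for chunk in response:
    # each chunk mirrors OpenAI's streaming format; delta.content can be None
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")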
Exception handling
LiteLLM maps exceptions across all supported providers to OpenAI exceptions. All of our exceptions inherit from OpenAI's exception types, so any error handling you have for OpenAI should work out of the box with LiteLLM.
from openai import OpenAIError  # openai v1.0.0+
from litellm import completion
import os

os.environ["ANTHROPIC_API_KEY"] = "bad-key"
try:
# some code
completion(model="claude-instant-1", messages=[{"role": "user", "content": "Hey, how's it going?"}])
except OpenAIError as e:
print(e)
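Because the mapped exceptions inherit from OpenAI's exception types, more specific handlers also work. A sketch assuming openai v1-style exception classes:
import openai
from litellm import completion

try:
    completion(model="claude-instant-1", messages=[{"role": "user", "content": "Hey, how's it going?"}])
except openai.AuthenticationError as e:
    print("invalid credentials:", e)
except openai.RateLimitError as e:
    print("rate limited - worth retrying with backoff:", e)
except openai.OpenAIError as e:
    print("any other mapped provider error:", e)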
Logging Observability - Log LLM Input/Output (Docs)
LiteLLM exposes pre-defined callbacks to send data to Lunary, Langfuse, Helicone, Promptlayer, Traceloop, and Slack.
import litellm
from litellm import completion
import os

## set ENV variables for the logging tools
os.environ["HELICONE_API_KEY"] = "your-helicone-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["lunary", "langfuse", "helicone"] # log input/output to lunary, langfuse, helicone

# openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])
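Failed calls can be logged to the same tools; litellm also exposes a failure_callback list (a minimal sketch):
# also log failed/errored calls
litellm.failure_callback = ["lunary", "langfuse"]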
Track Costs, Usage, and Latency for Streaming
Use a callback function for this - more info on custom callbacks: https://docs.litellm.ai/docs/observability/custom_callback
import litellm
from litellm import completion

# track_cost_callback
def track_cost_callback(
    kwargs,                 # kwargs passed to the completion call
    completion_response,    # response from the completion call
    start_time, end_time    # start/end time of the call
):
    try:
        response_cost = kwargs.get("response_cost", 0)
        print("streaming response_cost", response_cost)
    except Exception:
        pass

# set the custom callback function
litellm.success_callback = [track_cost_callback]

# litellm.completion() call
response = completion(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Hi 👋 - i'm openai"
        }
    ],
    stream=True
)
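For non-streaming calls, cost can also be computed directly from the response; a minimal sketch assuming litellm's completion_cost helper:
from litellm import completion, completion_cost

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print("token usage:", response.usage)
print("cost for this call ($):", completion_cost(completion_response=response))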
LiteLLM Proxy Server (LLM Gateway)
Track spend across multiple projects/people.
The proxy provides:
📖 Proxy Endpoints - Swagger Docs
A complete tutorial with keys + rate limits is available here.
Quick Start Proxy - CLI
pip install 'litellm[proxy]'
Step 1: Start the litellm proxy
- pip package
- Docker container
$ litellm --model huggingface/bigcode/starcoder
#INFO: Proxy running on http://0.0.0.0:4000
Step 1. Create a config.yaml
Example litellm_config.yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/<your-azure-model-deployment>
      api_base: os.environ/AZURE_API_BASE # runs os.getenv("AZURE_API_BASE")
      api_key: os.environ/AZURE_API_KEY # runs os.getenv("AZURE_API_KEY")
      api_version: "2023-07-01-preview"
Step 2. Run the Docker image
docker run \
-v $(pwd)/litellm_config.yaml:/app/config.yaml \
-e AZURE_API_KEY=d6*********** \
-e AZURE_API_BASE=https://openai-***********/ \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-latest \
--config /app/config.yaml --detailed_debug
Step 2: Make a ChatCompletions request to the proxy
import openai # openai v1.0.0+
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000") # set the proxy as base_url
# request is sent to the model set on the litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])
print(response)
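Since the proxy exposes an OpenAI-compatible REST API, the same request can also be made without the OpenAI client. A minimal sketch using the requests library, assuming the proxy from the steps above is running on port 4000:
import requests

# plain HTTP call to the proxy's OpenAI-compatible /chat/completions endpoint
resp = requests.post(
    "http://0.0.0.0:4000/chat/completions",
    headers={"Authorization": "Bearer anything", "Content-Type": "application/json"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "this is a test request, write a short poem"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])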