
Modify / Reject Incoming Requests

  • Modify data before making the llm api call on the proxy
  • Reject data before making the llm api call / before returning the response
  • Enforce 'user' param for all openai endpoint calls

See our parallel request rate limiter for a complete example of this.

Quick Start

  1. Add a new async_pre_call_hook function to your custom handler

This function is called just before litellm makes the completion call, and allows you to modify the data going into the litellm call. See Code

from litellm.integrations.custom_logger import CustomLogger
import litellm
from litellm.proxy.proxy_server import UserAPIKeyAuth, DualCache
from typing import Optional, Literal

# This file includes the custom callbacks for LiteLLM Proxy
# Once defined, these can be passed in proxy_config.yaml
class MyCustomHandler(CustomLogger): # https://docs.litellm.ai/docs/observability/custom_callback#callback-class
    # Class variables or attributes
    def __init__(self):
        pass

    #### CALL HOOKS - proxy only ####

    async def async_pre_call_hook(self, user_api_key_dict: UserAPIKeyAuth, cache: DualCache, data: dict, call_type: Literal[
            "completion",
            "text_completion",
            "embeddings",
            "image_generation",
            "moderation",
            "audio_transcription",
        ]):
        # Modify the incoming request here, before litellm makes the call
        data["model"] = "my-new-model"
        return data

    async def async_post_call_failure_hook(
        self,
        request_data: dict,
        original_exception: Exception,
        user_api_key_dict: UserAPIKeyAuth
    ):
        pass

    async def async_post_call_success_hook(
        self,
        data: dict,
        user_api_key_dict: UserAPIKeyAuth,
        response,
    ):
        pass

    async def async_moderation_hook( # call made in parallel to llm api call
        self,
        data: dict,
        user_api_key_dict: UserAPIKeyAuth,
        call_type: Literal["completion", "embeddings", "image_generation", "moderation", "audio_transcription"],
    ):
        pass

    async def async_post_call_streaming_hook(
        self,
        user_api_key_dict: UserAPIKeyAuth,
        response: str,
    ):
        pass

proxy_handler_instance = MyCustomHandler()
  2. Add this file to your proxy config

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo

litellm_settings:
  callbacks: custom_callbacks.proxy_handler_instance # sets litellm.callbacks = [proxy_handler_instance]
  3. Start the proxy server + test the request

$ litellm /path/to/config.yaml

curl --location 'http://0.0.0.0:4000/chat/completions' \
    --header 'Content-Type: application/json' \
    --data '{
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": "good morning good sir"
            }
        ],
        "user": "ishaan-app",
        "temperature": 0.2
    }'
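
For reference, here is the same request made through the OpenAI Python SDK, pointed at the proxy. This is a minimal sketch: the api_key value is a placeholder and it assumes the proxy is running locally on port 4000.

from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")  # placeholder key

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "good morning good sir"}],
    user="ishaan-app",
    temperature=0.2,
)
# async_pre_call_hook rewrites data["model"] before the call is made,
# so the proxy routes this request to "my-new-model"
print(response.model, response.choices[0].message.content)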

[BETA] async_moderation_hook

Run a content moderation check in parallel to the actual LLM API call.

  1. Add a new async_moderation_hook function to your custom handler

  • This is currently only supported for /chat/completion calls.
  • This function runs in parallel to the actual LLM API call.
  • If your async_moderation_hook raises an Exception, we will return that exception to the user.
info

We might need to update the function schema in the future, to support multiple endpoints (e.g. accept a call_type). Please keep that in mind while trying out this feature.

See a complete example of this with our Llama Guard content moderation hook.

from litellm.integrations.custom_logger import CustomLogger
import litellm
from litellm.proxy.proxy_server import UserAPIKeyAuth, DualCache
from fastapi import HTTPException
from typing import Literal

# This file includes the custom callbacks for LiteLLM Proxy
# Once defined, these can be passed in proxy_config.yaml
class MyCustomHandler(CustomLogger): # https://docs.litellm.ai/docs/observability/custom_callback#callback-class
    # Class variables or attributes
    def __init__(self):
        pass

    #### ASYNC ####

    async def async_log_stream_event(self, kwargs, response_obj, start_time, end_time):
        pass

    async def async_log_pre_api_call(self, model, messages, kwargs):
        pass

    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        pass

    async def async_log_failure_event(self, kwargs, response_obj, start_time, end_time):
        pass

    #### CALL HOOKS - proxy only ####

    async def async_pre_call_hook(self, user_api_key_dict: UserAPIKeyAuth, cache: DualCache, data: dict, call_type: Literal["completion", "embeddings"]):
        data["model"] = "my-new-model"
        return data
    async def async_moderation_hook( ### 👈 KEY CHANGE ###
        self,
        data: dict,
    ):
        messages = data["messages"]
        print(messages)
        # Reject the request if the first message matches the test prompt below
        if messages[0]["content"] == "Hello world":
            raise HTTPException(
                status_code=400, detail={"error": "Violates content safety policy"}
            )

proxy_handler_instance = MyCustomHandler()
  2. Add this file to your proxy config

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo

litellm_settings:
  callbacks: custom_callbacks.proxy_handler_instance # sets litellm.callbacks = [proxy_handler_instance]
  3. Start the proxy server + test the request

$ litellm /path/to/config.yaml

curl --location 'http://0.0.0.0:4000/chat/completions' \
    --header 'Content-Type: application/json' \
    --data '{
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": "Hello world"
            }
        ]
    }'
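
Because the moderation hook raises an HTTPException for the "Hello world" prompt, the proxy returns an error to the caller. A minimal sketch of the client-side effect using the OpenAI Python SDK follows; the api_key is a placeholder and the exact status code and error body depend on your LiteLLM version.

from openai import OpenAI, APIStatusError

client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")  # placeholder key

try:
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello world"}],
    )
except APIStatusError as e:
    # The HTTPException raised by async_moderation_hook surfaces here
    print("blocked by moderation hook:", e.status_code)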

Advanced - Enforce 'user' param

Set enforce_user_param to true. This requires all calls to openai endpoints to include the 'user' param.

See Code

general_settings:
  enforce_user_param: True

Result
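
With this enabled, a request that omits the 'user' field is rejected by the proxy. Below is a minimal sketch of the expected behaviour via the OpenAI Python SDK; the api_key is a placeholder and the exact status code and error message depend on your LiteLLM version.

from openai import OpenAI, APIStatusError

client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")  # placeholder key

try:
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hi"}],  # no 'user' field -> rejected
    )
except APIStatusError as e:
    print("rejected:", e.status_code)

# Adding user="my-app-user" to the same call satisfies the check
client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    user="my-app-user",
)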

Advanced - Return rejected message as response

For chat completions and text completion calls, you can return a rejected message as a user response.

Do this by returning a string. LiteLLM takes care of returning the response in the correct format, depending on whether the endpoint is streaming or non-streaming.

For non-chat/text completion endpoints, this response is returned as a 400 status code exception.

1. Create a custom handler

from litellm.integrations.custom_logger import CustomLogger
import litellm
from litellm.utils import get_formatted_prompt
from litellm.proxy.proxy_server import UserAPIKeyAuth, DualCache
from typing import Optional, Literal, Union

# This file includes the custom callbacks for LiteLLM Proxy
# Once defined, these can be passed in proxy_config.yaml
class MyCustomHandler(CustomLogger):
    def __init__(self):
        pass

    #### CALL HOOKS - proxy only ####

    async def async_pre_call_hook(self, user_api_key_dict: UserAPIKeyAuth, cache: DualCache, data: dict, call_type: Literal[
            "completion",
            "text_completion",
            "embeddings",
            "image_generation",
            "moderation",
            "audio_transcription",
        ]) -> Optional[Union[dict, str, Exception]]:
        formatted_prompt = get_formatted_prompt(data=data, call_type=call_type)

        # Returning a string rejects the request and sends the string back as the response
        if "Hello world" in formatted_prompt:
            return "This is an invalid response"

        return data

proxy_handler_instance = MyCustomHandler()

2. Update config.yaml

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo

litellm_settings:
  callbacks: custom_callbacks.proxy_handler_instance # sets litellm.callbacks = [proxy_handler_instance]

3. Test it!

$ litellm /path/to/config.yaml

curl --location 'http://0.0.0.0:4000/chat/completions' \
    --header 'Content-Type: application/json' \
    --data '{
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": "Hello world"
            }
        ]
    }'

Expected Response

{
    "id": "chatcmpl-d00bbede-2d90-4618-bf7b-11a1c23cf360",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "This is an invalid response", # 👈 REJECTED RESPONSE
                "role": "assistant"
            }
        }
    ],
    "created": 1716234198,
    "model": null,
    "object": "chat.completion",
    "system_fingerprint": null,
    "usage": {}
}
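
Because LiteLLM also formats the rejection for streaming endpoints, the same string is returned as streamed chunks when stream=True. A minimal client-side sketch with the OpenAI Python SDK (placeholder api_key):

from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")  # placeholder key

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}],
    stream=True,
)
for chunk in stream:
    # The rejected message is delivered in standard streaming chunk format
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")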