
Custom API Server (Custom Format)

Call your custom torch-serve / internal LLM APIs via LiteLLM


Quickstart

```python
import litellm
from litellm import CustomLLM, completion, get_llm_provider


class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Hello world"}],
            mock_response="Hi!",
        )  # type: ignore


my_custom_llm = MyCustomLLM()

litellm.custom_provider_map = [  # 👈 KEY STEP - REGISTER HANDLER
    {"provider": "my-custom-llm", "custom_handler": my_custom_llm}
]

resp = completion(
    model="my-custom-llm/my-fake-model",
    messages=[{"role": "user", "content": "Hello world!"}],
)

assert resp.choices[0].message.content == "Hi!"
```
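Under the hood, the `"provider/model"` string is what routes the call to the registered handler. The mechanism can be sketched in plain Python (illustrative only; `handlers` and `route_completion` are hypothetical names, not LiteLLM internals):

```python
# Hypothetical sketch of provider-based dispatch: a registry maps a
# provider name to a handler, and the "provider/model" string selects it.
handlers = {}  # provider name -> handler object


class EchoHandler:
    def completion(self, model, messages):
        # Stand-in handler that always answers "Hi!", like the mock above.
        return {"model": model, "content": "Hi!"}


handlers["my-custom-llm"] = EchoHandler()


def route_completion(model, messages):
    # Split "my-custom-llm/my-fake-model" into provider + model name.
    provider, _, model_name = model.partition("/")
    handler = handlers.get(provider)
    if handler is None:
        raise ValueError(f"no handler registered for provider: {provider}")
    return handler.completion(model_name, messages)


resp = route_completion(
    "my-custom-llm/my-fake-model",
    [{"role": "user", "content": "Hello world!"}],
)
print(resp["content"])  # Hi!
```

This is why registering the handler in `litellm.custom_provider_map` is the key step: without a registry entry for the provider prefix, there is nothing to dispatch to.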

OpenAI Proxy Usage

1. Setup your `custom_handler.py` file

```python
import litellm
from litellm import CustomLLM, completion, get_llm_provider


class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Hello world"}],
            mock_response="Hi!",
        )  # type: ignore

    async def acompletion(self, *args, **kwargs) -> litellm.ModelResponse:
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Hello world"}],
            mock_response="Hi!",
        )  # type: ignore


my_custom_llm = MyCustomLLM()
```
2. Add to `config.yaml`

In the config below, we pass:

- `python_filename`: `custom_handler.py`
- `custom_handler_instance_name`: `my_custom_llm`. This is defined in Step 1.

`custom_handler: custom_handler.my_custom_llm`

```yaml
model_list:
  - model_name: "test-model"
    litellm_params:
      model: "openai/text-embedding-ada-002"
  - model_name: "my-custom-model"
    litellm_params:
      model: "my-custom-llm/my-model"

litellm_settings:
  custom_provider_map:
    - {"provider": "my-custom-llm", "custom_handler": custom_handler.my_custom_llm}
```

```shell
litellm --config /path/to/config.yaml
```
3. Test it!

```shell
curl -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
    "model": "my-custom-model",
    "messages": [{"role": "user", "content": "Say \"this is a test\" in JSON!"}]
}'
```

Expected Response

```json
{
    "id": "chatcmpl-06f1b9cd-08bc-43f7-9814-a69173921216",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "Hi!",
                "role": "assistant",
                "tool_calls": null,
                "function_call": null
            }
        }
    ],
    "created": 1721955063,
    "model": "gpt-3.5-turbo",
    "object": "chat.completion",
    "system_fingerprint": null,
    "usage": {
        "prompt_tokens": 10,
        "completion_tokens": 20,
        "total_tokens": 30
    }
}
```
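The response follows the OpenAI chat-completion shape, so the reply text lives at `choices[0].message.content` and token counts under `usage`. A quick sketch of pulling those fields out of the raw JSON:

```python
import json

# Parse a response in the OpenAI chat-completion shape and extract
# the reply text and token usage.
raw = """
{
  "id": "chatcmpl-06f1b9cd-08bc-43f7-9814-a69173921216",
  "choices": [
    {"finish_reason": "stop", "index": 0,
     "message": {"content": "Hi!", "role": "assistant"}}
  ],
  "created": 1721955063,
  "model": "gpt-3.5-turbo",
  "object": "chat.completion",
  "usage": {"prompt_tokens": 10, "completion_tokens": 20, "total_tokens": 30}
}
"""
resp = json.loads(raw)
content = resp["choices"][0]["message"]["content"]
total = resp["usage"]["total_tokens"]
print(content, total)  # Hi! 30
```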

Add Streaming Support

Here's a simple example that returns unix epoch seconds for both the completion and streaming use cases.

Thanks to @Eloy Lafuente for this code example.

```python
import time
from typing import Iterator, AsyncIterator
from litellm.types.utils import GenericStreamingChunk, ModelResponse
from litellm import CustomLLM, completion, acompletion


class UnixTimeLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> ModelResponse:
        return completion(
            model="test/unixtime",
            mock_response=str(int(time.time())),
        )  # type: ignore

    async def acompletion(self, *args, **kwargs) -> ModelResponse:
        return await acompletion(
            model="test/unixtime",
            mock_response=str(int(time.time())),
        )  # type: ignore

    def streaming(self, *args, **kwargs) -> Iterator[GenericStreamingChunk]:
        generic_streaming_chunk: GenericStreamingChunk = {
            "finish_reason": "stop",
            "index": 0,
            "is_finished": True,
            "text": str(int(time.time())),
            "tool_use": None,
            "usage": {"completion_tokens": 0, "prompt_tokens": 0, "total_tokens": 0},
        }
        return generic_streaming_chunk  # type: ignore

    async def astreaming(self, *args, **kwargs) -> AsyncIterator[GenericStreamingChunk]:
        generic_streaming_chunk: GenericStreamingChunk = {
            "finish_reason": "stop",
            "index": 0,
            "is_finished": True,
            "text": str(int(time.time())),
            "tool_use": None,
            "usage": {"completion_tokens": 0, "prompt_tokens": 0, "total_tokens": 0},
        }
        yield generic_streaming_chunk  # type: ignore


unixtime = UnixTimeLLM()
```
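On the consumer side, a chunk stream like this is handled by iterating the generator and concatenating each chunk's `text` until a chunk reports `is_finished`. A plain-Python sketch with no LiteLLM dependency (the dicts merely mimic the `GenericStreamingChunk` fields used above):

```python
import time
from typing import Iterator


def fake_stream() -> Iterator[dict]:
    # Mimic a stream of GenericStreamingChunk-style dicts:
    # two partial chunks, then a final chunk carrying the epoch time.
    for word in ["unix", " time:"]:
        yield {"text": word, "is_finished": False, "finish_reason": ""}
    yield {
        "text": " " + str(int(time.time())),
        "is_finished": True,
        "finish_reason": "stop",
    }


def collect(stream: Iterator[dict]) -> str:
    # Accumulate chunk text until the stream signals completion.
    parts = []
    for chunk in stream:
        parts.append(chunk["text"])
        if chunk["is_finished"]:
            break
    return "".join(parts)


print(collect(fake_stream()))  # e.g. "unix time: 1721955063"
```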

Image Generation

1. Setup your `custom_handler.py` file

```python
import time
from typing import Any, Optional, Union

import httpx
import litellm
from litellm import CustomLLM
from litellm.llms.custom_httpx.http_handler import AsyncHTTPHandler
from litellm.types.utils import ImageResponse, ImageObject


class MyCustomLLM(CustomLLM):
    async def aimage_generation(
        self,
        model: str,
        prompt: str,
        model_response: ImageResponse,
        optional_params: dict,
        logging_obj: Any,
        timeout: Optional[Union[float, httpx.Timeout]] = None,
        client: Optional[AsyncHTTPHandler] = None,
    ) -> ImageResponse:
        return ImageResponse(
            created=int(time.time()),
            data=[ImageObject(url="https://example.com/image.png")],
        )


my_custom_llm = MyCustomLLM()
```
2. Add to `config.yaml`

In the config below, we pass:

- `python_filename`: `custom_handler.py`
- `custom_handler_instance_name`: `my_custom_llm`. This is defined in Step 1.

`custom_handler: custom_handler.my_custom_llm`

```yaml
model_list:
  - model_name: "test-model"
    litellm_params:
      model: "openai/text-embedding-ada-002"
  - model_name: "my-custom-model"
    litellm_params:
      model: "my-custom-llm/my-model"

litellm_settings:
  custom_provider_map:
    - {"provider": "my-custom-llm", "custom_handler": custom_handler.my_custom_llm}
```

```shell
litellm --config /path/to/config.yaml
```
3. Test it!

```shell
curl -X POST 'http://0.0.0.0:4000/v1/images/generations' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
    "model": "my-custom-model",
    "prompt": "A cute baby sea otter"
}'
```

Expected Response

```json
{
    "created": 1721955063,
    "data": [{"url": "https://example.com/image.png"}]
}
```

Additional Parameters

Additional parameters are passed inside the `optional_params` key of the `completion` / `image_generation` functions.

Here's how to set these:

```python
import litellm
from litellm import CustomLLM, completion, get_llm_provider


class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        assert kwargs["optional_params"] == {"my_custom_param": "my-custom-param"}  # 👈 CHECK HERE
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Hello world"}],
            mock_response="Hi!",
        )  # type: ignore


my_custom_llm = MyCustomLLM()

litellm.custom_provider_map = [  # 👈 KEY STEP - REGISTER HANDLER
    {"provider": "my-custom-llm", "custom_handler": my_custom_llm}
]

resp = completion(model="my-custom-llm/my-model", my_custom_param="my-custom-param")
```
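The kwarg-forwarding behavior itself is straightforward to picture: any keyword the caller passes that isn't a recognized standard parameter ends up in the `optional_params` dict handed to the handler. A plain-Python sketch (the `KNOWN_PARAMS` set and `split_optional_params` helper are illustrative, not LiteLLM internals):

```python
# Hypothetical sketch: separate standard completion parameters from
# provider-specific extras, the way custom params reach optional_params.
KNOWN_PARAMS = {"model", "messages", "temperature", "max_tokens"}  # illustrative subset


def split_optional_params(**kwargs):
    standard = {k: v for k, v in kwargs.items() if k in KNOWN_PARAMS}
    optional = {k: v for k, v in kwargs.items() if k not in KNOWN_PARAMS}
    return standard, optional


standard, optional = split_optional_params(
    model="my-custom-llm/my-model",
    my_custom_param="my-custom-param",
)
print(optional)  # {'my_custom_param': 'my-custom-param'}
```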
1. Setup your `custom_handler.py` file

```python
import time
from typing import Any, Optional, Union

import httpx
import litellm
from litellm import CustomLLM
from litellm.llms.custom_httpx.http_handler import AsyncHTTPHandler
from litellm.types.utils import ImageResponse, ImageObject


class MyCustomLLM(CustomLLM):
    async def aimage_generation(
        self,
        model: str,
        prompt: str,
        model_response: ImageResponse,
        optional_params: dict,
        logging_obj: Any,
        timeout: Optional[Union[float, httpx.Timeout]] = None,
        client: Optional[AsyncHTTPHandler] = None,
    ) -> ImageResponse:
        assert optional_params == {"my_custom_param": "my-custom-param"}  # 👈 CHECK HERE
        return ImageResponse(
            created=int(time.time()),
            data=[ImageObject(url="https://example.com/image.png")],
        )


my_custom_llm = MyCustomLLM()
```
2. Add to `config.yaml`

In the config below, we pass:

- `python_filename`: `custom_handler.py`
- `custom_handler_instance_name`: `my_custom_llm`. This is defined in Step 1.

`custom_handler: custom_handler.my_custom_llm`

```yaml
model_list:
  - model_name: "test-model"
    litellm_params:
      model: "openai/text-embedding-ada-002"
  - model_name: "my-custom-model"
    litellm_params:
      model: "my-custom-llm/my-model"
      my_custom_param: "my-custom-param" # 👈 CUSTOM PARAM

litellm_settings:
  custom_provider_map:
    - {"provider": "my-custom-llm", "custom_handler": custom_handler.my_custom_llm}
```

```shell
litellm --config /path/to/config.yaml
```
3. Test it!

```shell
curl -X POST 'http://0.0.0.0:4000/v1/images/generations' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
    "model": "my-custom-model",
    "prompt": "A cute baby sea otter"
}'
```

Custom Handler Spec

```python
import httpx
from typing import Iterator, AsyncIterator, Any, Optional, Union
from litellm.types.utils import GenericStreamingChunk, ModelResponse, ImageResponse
from litellm.llms.base import BaseLLM
from litellm.llms.custom_httpx.http_handler import HTTPHandler, AsyncHTTPHandler


class CustomLLMError(Exception):  # use this for all your exceptions
    def __init__(
        self,
        status_code,
        message,
    ):
        self.status_code = status_code
        self.message = message
        super().__init__(
            self.message
        )  # Call the base class constructor with the parameters it needs


class CustomLLM(BaseLLM):
    def __init__(self) -> None:
        super().__init__()

    def completion(self, *args, **kwargs) -> ModelResponse:
        raise CustomLLMError(status_code=500, message="Not implemented yet!")

    def streaming(self, *args, **kwargs) -> Iterator[GenericStreamingChunk]:
        raise CustomLLMError(status_code=500, message="Not implemented yet!")

    async def acompletion(self, *args, **kwargs) -> ModelResponse:
        raise CustomLLMError(status_code=500, message="Not implemented yet!")

    async def astreaming(self, *args, **kwargs) -> AsyncIterator[GenericStreamingChunk]:
        raise CustomLLMError(status_code=500, message="Not implemented yet!")

    def image_generation(
        self,
        model: str,
        prompt: str,
        model_response: ImageResponse,
        optional_params: dict,
        logging_obj: Any,
        timeout: Optional[Union[float, httpx.Timeout]] = None,
        client: Optional[HTTPHandler] = None,
    ) -> ImageResponse:
        raise CustomLLMError(status_code=500, message="Not implemented yet!")

    async def aimage_generation(
        self,
        model: str,
        prompt: str,
        model_response: ImageResponse,
        optional_params: dict,
        logging_obj: Any,
        timeout: Optional[Union[float, httpx.Timeout]] = None,
        client: Optional[AsyncHTTPHandler] = None,
    ) -> ImageResponse:
        raise CustomLLMError(status_code=500, message="Not implemented yet!")
```
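The spec's error pattern can be shown in miniature without any LiteLLM imports: unimplemented methods raise a typed error carrying a status code, and a subclass overrides only the methods it supports. A self-contained sketch (`MiniCustomLLM` and `TimeLLM` are illustrative names):

```python
class CustomLLMError(Exception):
    # Typed error carrying an HTTP-style status code a proxy can map
    # to an HTTP response.
    def __init__(self, status_code: int, message: str):
        self.status_code = status_code
        self.message = message
        super().__init__(message)


class MiniCustomLLM:
    # Base class: every method fails loudly until a subclass overrides it.
    def completion(self, *args, **kwargs):
        raise CustomLLMError(status_code=500, message="Not implemented yet!")


class TimeLLM(MiniCustomLLM):
    # Subclass implementing only the method it supports.
    def completion(self, *args, **kwargs):
        return {"content": "Hi!"}


try:
    MiniCustomLLM().completion()
except CustomLLMError as e:
    print(e.status_code, e.message)  # 500 Not implemented yet!

print(TimeLLM().completion()["content"])  # Hi!
```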
