OpenAI (Text Completion)

LiteLLM supports OpenAI's text completion models

Required API Keys

import os 
os.environ["OPENAI_API_KEY"] = "your-api-key"

Usage

import os 
from litellm import completion

os.environ["OPENAI_API_KEY"] = "your-api-key"

# openai call
response = completion(
    model="gpt-3.5-turbo-instruct",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
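
The response follows OpenAI's chat completion format. A minimal sketch of reading the generated text and token usage (field access assumes LiteLLM's standard OpenAI-compatible response object):

# generated text lives in the first choice's message
print(response.choices[0].message.content)

# token usage for the call
print(response.usage)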

Usage - LiteLLM Proxy Server

Here's how to call OpenAI models through the LiteLLM Proxy Server

1. Save your key in the environment

export OPENAI_API_KEY=""

2. Start the proxy

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo # the `openai/` prefix routes to openai.chat.completions.create
      api_key: os.environ/OPENAI_API_KEY
  - model_name: gpt-3.5-turbo-instruct
    litellm_params:
      model: text-completion-openai/gpt-3.5-turbo-instruct # the `text-completion-openai/` prefix routes to openai.completions.create
      api_key: os.environ/OPENAI_API_KEY

Use this config to add all openai models with a single API key. WARNING: this does not do any load balancing. It means requests to gpt-4, gpt-3.5-turbo, and gpt-4-turbo-preview will all go through this route.

model_list:
  - model_name: "*" # all requests for models not in the config go through this deployment
    litellm_params:
      model: openai/* # set `openai/` to use the openai route
      api_key: os.environ/OPENAI_API_KEY
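
With the wildcard route above, a request for a model name that is not explicitly listed in the config still resolves through the `openai/*` deployment. A minimal sketch of exercising this (assuming the proxy is already running on http://0.0.0.0:4000 with the wildcard config loaded; gpt-4 is just an illustrative choice):

import openai

# the key can be anything here; the proxy holds the real OpenAI key from its config
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000")

# gpt-4 is not in model_list, so it matches the "*" entry and is forwarded via openai/*
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)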
$ litellm --model gpt-3.5-turbo-instruct

# Server running on http://0.0.0.0:4000

3. Test it

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
    "model": "gpt-3.5-turbo-instruct",
    "messages": [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ]
}'
import openai
client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000"
)

# request sent to the model set on the litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo-instruct",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ]
)

print(response)

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(
    openai_api_base="http://0.0.0.0:4000",  # set openai_api_base to the LiteLLM Proxy
    model="gpt-3.5-turbo-instruct",
    temperature=0.1
)

messages = [
    SystemMessage(
        content="You are a helpful assistant that im using to make a test request to."
    ),
    HumanMessage(
        content="test from litellm. tell me why it's amazing in 1 sentence"
    ),
]
response = chat(messages)

print(response)

OpenAI Text Completion Models / Instruct Models

Model Name | Function Call
gpt-3.5-turbo-instruct | response = completion(model="gpt-3.5-turbo-instruct", messages=messages)
gpt-3.5-turbo-instruct-0914 | response = completion(model="gpt-3.5-turbo-instruct-0914", messages=messages)
text-davinci-003 | response = completion(model="text-davinci-003", messages=messages)
ada-001 | response = completion(model="ada-001", messages=messages)
curie-001 | response = completion(model="curie-001", messages=messages)
babbage-001 | response = completion(model="babbage-001", messages=messages)
babbage-002 | response = completion(model="babbage-002", messages=messages)
davinci-002 | response = completion(model="davinci-002", messages=messages)
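
The call shape is identical for every model in the table. A minimal sketch looping over a few of them (assumes OPENAI_API_KEY is set in the environment; the prompt is just an illustration):

from litellm import completion

messages = [{"role": "user", "content": "Say hello in one short sentence."}]

# same messages-based interface for each instruct / text completion model;
# LiteLLM routes these to OpenAI's text completion endpoint under the hood
for model in ["gpt-3.5-turbo-instruct", "babbage-002", "davinci-002"]:
    response = completion(model=model, messages=messages)
    print(model, "->", response.choices[0].message.content)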