
# Perplexity AI (pplx-api)

https://www.perplexity.ai

## API Key

```python
# Environment variable
os.environ['PERPLEXITYAI_API_KEY']
```

## Sample Usage

```python
from litellm import completion
import os

os.environ['PERPLEXITYAI_API_KEY'] = ""

messages = [{"role": "user", "content": "Hello, how are you?"}]

response = completion(
    model="perplexity/mistral-7b-instruct",
    messages=messages,
)
print(response)
```

## Sample Usage - Streaming

```python
from litellm import completion
import os

os.environ['PERPLEXITYAI_API_KEY'] = ""

messages = [{"role": "user", "content": "Hello, how are you?"}]

response = completion(
    model="perplexity/mistral-7b-instruct",
    messages=messages,
    stream=True,
)

for chunk in response:
    print(chunk)
```
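The streamed chunks follow the OpenAI-style delta format, where each chunk carries a fragment of the message. A minimal sketch of assembling the full text from a stream, using hypothetical dict-shaped mock chunks in place of a live response (real LiteLLM chunks are objects with the same field layout):

```python
def collect_stream_text(chunks):
    """Concatenate the content deltas from an OpenAI-style chunk stream."""
    parts = []
    for chunk in chunks:
        # Each chunk carries a partial message in choices[0].delta.content;
        # the final chunk may carry no content, so default to "".
        delta = chunk["choices"][0]["delta"].get("content") or ""
        parts.append(delta)
    return "".join(parts)

# Hypothetical mock chunks standing in for the real stream:
mock_chunks = [
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
    {"choices": [{"delta": {}}]},
]
print(collect_stream_text(mock_chunks))  # Hello, world
```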

## Supported Models

All models listed at https://docs.perplexity.ai/docs/model-cards are supported. Just do `model=perplexity/<model-name>`.

| Model Name | Function Call |
|------------|---------------|
| pplx-7b-chat | `completion(model="perplexity/pplx-7b-chat", messages)` |
| pplx-70b-chat | `completion(model="perplexity/pplx-70b-chat", messages)` |
| pplx-7b-online | `completion(model="perplexity/pplx-7b-online", messages)` |
| pplx-70b-online | `completion(model="perplexity/pplx-70b-online", messages)` |
| codellama-34b-instruct | `completion(model="perplexity/codellama-34b-instruct", messages)` |
| llama-2-13b-chat | `completion(model="perplexity/llama-2-13b-chat", messages)` |
| llama-2-70b-chat | `completion(model="perplexity/llama-2-70b-chat", messages)` |
| mistral-7b-instruct | `completion(model="perplexity/mistral-7b-instruct", messages)` |
| openhermes-2-mistral-7b | `completion(model="perplexity/openhermes-2-mistral-7b", messages)` |
| openhermes-2.5-mistral-7b | `completion(model="perplexity/openhermes-2.5-mistral-7b", messages)` |
| pplx-7b-chat-alpha | `completion(model="perplexity/pplx-7b-chat-alpha", messages)` |
| pplx-70b-chat-alpha | `completion(model="perplexity/pplx-70b-chat-alpha", messages)` |
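The `perplexity/<model-name>` routing convention above can be captured in a tiny helper; a minimal sketch (the function name is hypothetical, not part of LiteLLM):

```python
def to_litellm_model(perplexity_model: str) -> str:
    """Prefix a Perplexity model name with the 'perplexity/' provider route."""
    if perplexity_model.startswith("perplexity/"):
        return perplexity_model  # already routed
    return f"perplexity/{perplexity_model}"

print(to_litellm_model("mistral-7b-instruct"))  # perplexity/mistral-7b-instruct
```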

## Return Citations

Perplexity supports returning citations via `return_citations=True`. See the Perplexity docs. Note: Perplexity has this feature in **closed beta**, so you need to be granted access by them to get citations from their API.

If Perplexity returns citations, LiteLLM will pass them straight through.

> **info**
>
> For passing more provider-specific parameters, go here.

```python
from litellm import completion
import os

os.environ['PERPLEXITYAI_API_KEY'] = ""

messages = [{"role": "user", "content": "Who won the world cup in 2022?"}]

response = completion(
    model="perplexity/mistral-7b-instruct",
    messages=messages,
    return_citations=True,
)
print(response)
```
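Because the feature is in closed beta, citations may simply be absent from a response. A minimal sketch of reading them defensively from a response converted to a dict, assuming the passed-through payload exposes a top-level `citations` list (the field name is an assumption; check your actual responses):

```python
def get_citations(response_dict):
    """Return the citations list if present, else an empty list.

    Assumes the passed-through Perplexity payload exposes a top-level
    'citations' key (field name is an assumption; verify against real output).
    """
    citations = response_dict.get("citations")
    return citations if isinstance(citations, list) else []

# Hypothetical response shapes:
print(get_citations({"citations": ["https://example.com/a"]}))  # ['https://example.com/a']
print(get_citations({}))  # []
```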
1. Add perplexity to config.yaml

```yaml
model_list:
  - model_name: "perplexity-model"
    litellm_params:
      model: "perplexity/llama-3.1-sonar-small-128k-online"
      api_key: os.environ/PERPLEXITY_API_KEY
```

2. Start the proxy

```shell
litellm --config /path/to/config.yaml
```

3. Test it!

```shell
curl -L -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
  "model": "perplexity-model",
  "messages": [
    {
      "role": "user",
      "content": "Who won the world cup in 2022?"
    }
  ],
  "return_citations": true
}'
```
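The same request can be issued from Python. A minimal sketch that builds the JSON body the curl command sends (posting it requires the proxy running at the address above with `sk-1234` as the key, both taken from the example):

```python
import json

def build_chat_payload(model, user_content, return_citations=True):
    """Build the /chat/completions request body used in the curl example."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
        "return_citations": return_citations,
    }

payload = build_chat_payload("perplexity-model", "Who won the world cup in 2022?")
print(json.dumps(payload, indent=2))
```

Sending it mirrors the curl call, e.g. `requests.post('http://0.0.0.0:4000/chat/completions', json=payload, headers={'Authorization': 'Bearer sk-1234'})`.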

## Calling via OpenAI SDK, Langchain, Instructor, etc.
