
Perplexity

Perplexity's Sonar API offers a solution that combines real-time, grounded web search with advanced reasoning and deep research capabilities.

When to use:

  • When your application requires timely, relevant data straight from the web, such as dynamic content updates or current event tracking.
  • For products that need to support complex user queries with integrated reasoning and deep research, such as digital assistants or advanced search engines.

Before we get started, make sure you install llama-index:

%pip install llama-index-llms-perplexity
!pip install llama-index

As of April 12th, 2025 - the Perplexity LLM class in LlamaIndex supports the following models:

Model                 Context Length   Model Type
sonar-deep-research   128k             chat completion
sonar-reasoning-pro   128k             chat completion
sonar-reasoning       128k             chat completion
sonar-pro             200k             chat completion
sonar                 128k             chat completion
r1-1776               128k             chat completion
  • sonar-pro has a maximum output token limit of 8k.
  • The reasoning models output Chain of Thought responses.
  • r1-1776 is an offline chat model that does not use the Perplexity search subsystem.

You can find the latest supported models here.
For rate limit information, check here.
For pricing details, see here.
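The table above can also be captured as a small lookup in code, which is handy for validating a model choice before instantiating the client. This is a sketch: the `SUPPORTED_MODELS` mapping is transcribed from the table, and the helper name `validate_model` is hypothetical, not part of the LlamaIndex API.

```python
# Supported models and their context lengths (in tokens), per the table above.
SUPPORTED_MODELS = {
    "sonar-deep-research": 128_000,
    "sonar-reasoning-pro": 128_000,
    "sonar-reasoning": 128_000,
    "sonar-pro": 200_000,
    "sonar": 128_000,
    "r1-1776": 128_000,
}


def validate_model(name: str) -> int:
    """Return the context length for a supported model, or raise ValueError."""
    if name not in SUPPORTED_MODELS:
        raise ValueError(f"Unsupported model: {name!r}")
    return SUPPORTED_MODELS[name]


print(validate_model("sonar-pro"))  # 200000
```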

import getpass
import os

# Prompt for the API key only if it is not already set in the environment.
if "PPLX_API_KEY" not in os.environ:
    os.environ["PPLX_API_KEY"] = getpass.getpass(
        "Enter your Perplexity API key: "
    )

from llama_index.llms.perplexity import Perplexity

PPLX_API_KEY = os.environ["PPLX_API_KEY"]
llm = Perplexity(api_key=PPLX_API_KEY, model="sonar-pro", temperature=0.2)
# Import the ChatMessage class from the llama_index library.
from llama_index.core.llms import ChatMessage

# Create a list of dictionaries where each dictionary represents a chat message.
# Each dictionary contains a 'role' key (e.g., system or user) and a 'content' key with the corresponding message.
messages_dict = [
    {"role": "system", "content": "Be precise and concise."},
    {
        "role": "user",
        "content": "Tell me the latest news about the US Stock Market.",
    },
]

# Convert each dictionary in the list to a ChatMessage object using unpacking (**msg) in a list comprehension.
messages = [ChatMessage(**msg) for msg in messages_dict]

# Print the list of ChatMessage objects to verify the conversion.
print(messages)
[ChatMessage(role=<MessageRole.SYSTEM: 'system'>, additional_kwargs={}, blocks=[TextBlock(block_type='text', text='Be precise and concise.')]), ChatMessage(role=<MessageRole.USER: 'user'>, additional_kwargs={}, blocks=[TextBlock(block_type='text', text='Tell me the latest news about the US Stock Market.')])]
response = llm.chat(messages)
print(response)
assistant: The latest update on the U.S. stock market indicates a strong performance recently. A significant 10% rally occurred on Wednesday, which contributed substantially to market gains. Additionally, the market closed strongly on Friday, with a 2% increase, ending near the intraday high. This reflects robust momentum, particularly in mega and large-cap growth stocks[1].

For asynchronous conversation processing, use the achat method to send the messages and await the response:

# Asynchronously send the list of chat messages to the LLM using the 'achat' method.
# This method returns a ChatResponse object containing the model's answer.
response = await llm.achat(messages)
print(response)
assistant: The U.S. stock market has recently experienced significant gains. A major rally on Wednesday resulted in a 10% surge, contributing substantially to the market's overall upside. Additionally, the market closed strongly on Friday, with a 2% increase, ending near the intraday high. This performance highlights robust momentum, particularly in mega-cap and large-cap growth stocks[1].
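Note that a bare await works inside a notebook because one is already running an event loop; in a plain Python script you would wrap the call in asyncio.run. A minimal stdlib-only sketch of that pattern, with a stub coroutine standing in for llm.achat:

```python
import asyncio


# Stub coroutine standing in for llm.achat(messages); in real code you would
# await the Perplexity LLM instance created earlier instead.
async def achat_stub(messages):
    await asyncio.sleep(0)  # Simulate asynchronous network I/O.
    return f"response to {len(messages)} message(s)"


async def main():
    reply = await achat_stub(["system prompt", "user question"])
    return reply


# In a script (not a notebook), drive the coroutine with asyncio.run.
result = asyncio.run(main())
print(result)  # response to 2 message(s)
```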

For situations where you want to receive the response token by token in real time, use the stream_chat method:

# Call the stream_chat method on the LLM instance, which returns a generator or iterable
# for streaming the chat response one delta (token or chunk) at a time.
response = llm.stream_chat(messages)

# Iterate over each streaming response chunk.
for r in response:
    # Print the delta (the new chunk of generated text) without adding a newline.
    print(r.delta, end="")
The latest news about the U.S. stock market indicates a strong performance recently. The New York Stock Exchange (NYSE) experienced a significant rally, with a 10% surge on Wednesday, followed by a 2% gain on Friday. This upward momentum brought the market near its intraday high, driven by strength in mega-cap and large-cap growth stocks[1].
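Each streamed chunk's delta holds only the newly generated text, so assembling the full reply is simple concatenation. A stdlib-only sketch of that accumulation, with a stub generator (yielding objects with a .delta attribute, mirroring the chunks stream_chat yields) in place of the real call:

```python
from types import SimpleNamespace


# Stub generator mirroring stream_chat: yields chunk objects with a .delta attribute.
def stream_stub():
    for piece in ["The ", "market ", "rallied."]:
        yield SimpleNamespace(delta=piece)


# Print each delta as it arrives while accumulating the complete response text.
full_text = ""
for r in stream_stub():
    print(r.delta, end="")
    full_text += r.delta

print()  # Final newline after the stream ends.
```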

Similarly, for asynchronous streaming, the astream_chat method provides a way to process response deltas asynchronously:

# Asynchronously call the astream_chat method on the LLM instance,
# which returns an asynchronous generator that yields response chunks.
resp = await llm.astream_chat(messages)

# Asynchronously iterate over each response chunk from the generator.
# For each chunk (delta), print the chunk's text content.
async for delta in resp:
    print(delta.delta, end="")
The latest updates on the U.S. stock market indicate significant positive momentum. The New York Stock Exchange (NYSE) experienced a strong rally, with a notable 10% surge on Wednesday. This was followed by a 2% gain on Friday, closing near the intraday high. The market's performance has been driven by mega and large-cap growth stocks, contributing to the overall upside[1].

Perplexity models can easily be wrapped into a LlamaIndex tool so they can be invoked in data processing or conversational workflows. The tool uses real-time generative search powered by Perplexity, configured with the updated default model ("sonar-pro") and the enable_search_classifier parameter enabled.

Below is an example of how to define and register the tool:

from llama_index.core.tools import FunctionTool
from llama_index.llms.perplexity import Perplexity
from llama_index.core.llms import ChatMessage


def query_perplexity(query: str) -> str:
    """
    Queries the Perplexity API via the LlamaIndex integration.

    This function instantiates a Perplexity LLM with updated default settings
    (using model "sonar-pro" and enabling the search classifier so that the API
    can intelligently decide if a search is needed), wraps the query into a
    ChatMessage, and returns the generated response content.
    """
    pplx_api_key = (
        "your-perplexity-api-key"  # Replace with your actual API key
    )
    llm = Perplexity(
        api_key=pplx_api_key,
        model="sonar-pro",
        temperature=0.7,
        enable_search_classifier=True,  # Determines whether the search component is necessary in this particular context
    )
    messages = [ChatMessage(role="user", content=query)]
    response = llm.chat(messages)
    return response.message.content


# Create the tool from the query_perplexity function
query_perplexity_tool = FunctionTool.from_defaults(fn=query_perplexity)