
Defining a Custom Query Engine

You can (and should) define your own custom query engines to plug into downstream LlamaIndex workflows, whether you're building RAG, agents, or other applications.

We offer a CustomQueryEngine that makes it easy to define your own queries.

We first load some sample data and index it.

If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.

%pip install llama-index-llms-openai
!pip install llama-index
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

Download Data

!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
# load documents and build a vector index with a default retriever
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(documents)
retriever = index.as_retriever()
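
As a quick sanity check (not part of the original example), the retriever returns a list of NodeWithScore objects; these are the same objects the custom engines below will consume.

# Hedged sketch: inspect what the retriever hands back. NodeWithScore lives in
# llama_index.core.schema; the query string is purely illustrative.
nodes = retriever.retrieve("What did the author do growing up?")
print(len(nodes), type(nodes[0]).__name__)  # e.g. 2 NodeWithScore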

We build a custom query engine that mimics a RAG pipeline: first retrieval is performed, then synthesis.

To define a CustomQueryEngine, you simply declare some initialization parameters as attributes and implement the custom_query function.

By default, custom_query can return a Response object (which the response synthesizer returns), but it can also return just a string. These are options 1 and 2 respectively.
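
As a minimal sketch before the full example (this EchoQueryEngine is hypothetical, not part of the original notebook), the smallest possible subclass just implements custom_query and returns a string:

from llama_index.core.query_engine import CustomQueryEngine


class EchoQueryEngine(CustomQueryEngine):
    """Hypothetical engine that echoes the query back as a plain string."""

    def custom_query(self, query_str: str) -> str:
        return f"You asked: {query_str}"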

# Option 1: return a Response object (produced by the response synthesizer)
from llama_index.core.query_engine import CustomQueryEngine
from llama_index.core.retrievers import BaseRetriever
from llama_index.core import get_response_synthesizer
from llama_index.core.response_synthesizers import BaseSynthesizer


class RAGQueryEngine(CustomQueryEngine):
    """RAG Query Engine."""

    retriever: BaseRetriever
    response_synthesizer: BaseSynthesizer

    def custom_query(self, query_str: str):
        # retrieve relevant nodes, then synthesize a Response from them
        nodes = self.retriever.retrieve(query_str)
        response_obj = self.response_synthesizer.synthesize(query_str, nodes)
        return response_obj
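
If you need async support, CustomQueryEngine also exposes an acustom_query hook, which by default just falls back to the sync path. A hedged sketch, assuming your retriever and synthesizer provide the async aretrieve/asynthesize counterparts:

class AsyncRAGQueryEngine(RAGQueryEngine):
    """Hypothetical async variant: enables `await engine.aquery(...)`."""

    async def acustom_query(self, query_str: str):
        # same retrieve-then-synthesize flow, without blocking the event loop
        nodes = await self.retriever.aretrieve(query_str)
        return await self.response_synthesizer.asynthesize(query_str, nodes)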
# Option 2: return a string (we use a raw LLM call for illustration)
from llama_index.llms.openai import OpenAI
from llama_index.core import PromptTemplate

qa_prompt = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)


class RAGStringQueryEngine(CustomQueryEngine):
    """RAG String Query Engine."""

    retriever: BaseRetriever
    response_synthesizer: BaseSynthesizer
    llm: OpenAI
    qa_prompt: PromptTemplate

    def custom_query(self, query_str: str):
        # retrieve nodes, stuff their text into the prompt, and call the LLM directly
        nodes = self.retriever.retrieve(query_str)
        context_str = "\n\n".join([n.node.get_content() for n in nodes])
        response = self.llm.complete(
            self.qa_prompt.format(context_str=context_str, query_str=query_str)
        )
        return str(response)

Let's now try it out on our sample data.

synthesizer = get_response_synthesizer(response_mode="compact")
query_engine = RAGQueryEngine(
    retriever=retriever, response_synthesizer=synthesizer
)

response = query_engine.query("What did the author do growing up?")
print(str(response))
The author worked on writing and programming outside of school before college. They wrote short stories and tried writing programs on an IBM 1401 computer using an early version of Fortran. They also mentioned getting a microcomputer, building it themselves, and writing simple games and programs on it.
print(response.source_nodes[0].get_content())
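
Because option 1 returns a full Response object, it also carries retrieval metadata beyond the first source node. A hedged sketch of inspecting every source node and its similarity score (attribute names as in llama_index.core.schema.NodeWithScore):

for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.node_id)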
llm = OpenAI(model="gpt-3.5-turbo")

query_engine = RAGStringQueryEngine(
    retriever=retriever,
    response_synthesizer=synthesizer,
    llm=llm,
    qa_prompt=qa_prompt,
)
response = query_engine.query("What did the author do growing up?")
print(str(response))
The author worked on writing and programming before college. They wrote short stories and started programming on the IBM 1401 computer in 9th grade. They later got a microcomputer and continued programming, writing simple games and a word processor.
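
Finally, because RAGStringQueryEngine is a regular query engine, it plugs into downstream LlamaIndex workflows like any built-in engine. A hedged sketch of wrapping it as an agent tool (the tool name and description are made up for illustration):

from llama_index.core.tools import QueryEngineTool

rag_tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="paul_graham_rag",  # hypothetical tool name
    description="Answers questions about Paul Graham's essay.",
)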