Defining a Custom Query Engine
You can (and should) define your custom query engines in order to plug into your downstream LlamaIndex workflows, whether you're building RAG, agents, or other applications.
We provide a CustomQueryEngine that makes it easy to define your own queries.
Setup
We first load some sample data and index it.
If you're opening this Notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]:
%pip install llama-index-llms-openai
In [ ]:
!pip install llama-index
In [ ]:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
Download Data
In [ ]:
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
In [ ]:
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
In [ ]:
index = VectorStoreIndex.from_documents(documents)
retriever = index.as_retriever()
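as_retriever() also accepts configuration if you want to control how much context flows into your custom engine. A minimal sketch (the top-k value here is an arbitrary illustrative choice, not a recommendation):

# optional: cap the number of retrieved chunks fed to the query engine
retriever = index.as_retriever(similarity_top_k=2)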
Building a Custom Query Engine
We build a custom query engine that mimics the RAG pipeline: first perform retrieval, then perform synthesis.
To define a CustomQueryEngine, you simply have to define some initialization parameters as attributes and implement the custom_query function.
By default, custom_query can return a Response object (which the response synthesizer returns), but it can also just return a string. These correspond to option 1 and option 2 below.
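As a minimal sketch of that contract (the EchoQueryEngine name and prefix attribute below are hypothetical, purely for illustration):

from llama_index.core.query_engine import CustomQueryEngine


class EchoQueryEngine(CustomQueryEngine):
    """Toy engine: an init parameter declared as an attribute, plus custom_query."""

    prefix: str = "You asked: "

    def custom_query(self, query_str: str):
        # returning a plain str is allowed; a Response object also works
        return self.prefix + query_str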
In [ ]:
from llama_index.core.query_engine import CustomQueryEngine
from llama_index.core.retrievers import BaseRetriever
from llama_index.core import get_response_synthesizer
from llama_index.core.response_synthesizers import BaseSynthesizer
Option 1 (RAGQueryEngine)
In [ ]:
class RAGQueryEngine(CustomQueryEngine):
    """RAG Query Engine."""

    retriever: BaseRetriever
    response_synthesizer: BaseSynthesizer

    def custom_query(self, query_str: str):
        nodes = self.retriever.retrieve(query_str)
        response_obj = self.response_synthesizer.synthesize(query_str, nodes)
        return response_obj
Option 2 (RAGStringQueryEngine)
In [ ]:
# Option 2: return a string (we use a raw LLM call for illustration)
from llama_index.llms.openai import OpenAI
from llama_index.core import PromptTemplate

qa_prompt = PromptTemplate(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)


class RAGStringQueryEngine(CustomQueryEngine):
    """RAG String Query Engine."""

    retriever: BaseRetriever
    response_synthesizer: BaseSynthesizer
    llm: OpenAI
    qa_prompt: PromptTemplate

    def custom_query(self, query_str: str):
        nodes = self.retriever.retrieve(query_str)
        context_str = "\n\n".join([n.node.get_content() for n in nodes])
        # use the prompt passed in at construction time, not the module-level one
        response = self.llm.complete(
            self.qa_prompt.format(context_str=context_str, query_str=query_str)
        )
        return str(response)
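If you also need the async entry point (await query_engine.aquery(...)), CustomQueryEngine lets you implement acustom_query alongside custom_query. A sketch under that assumption, using the async counterparts aretrieve/acomplete:

class AsyncRAGStringQueryEngine(RAGStringQueryEngine):
    async def acustom_query(self, query_str: str):
        # async mirror of custom_query above (assumes acustom_query is
        # picked up by aquery; aretrieve/acomplete are the async variants)
        nodes = await self.retriever.aretrieve(query_str)
        context_str = "\n\n".join([n.node.get_content() for n in nodes])
        response = await self.llm.acomplete(
            self.qa_prompt.format(context_str=context_str, query_str=query_str)
        )
        return str(response)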
Trying it out
We now try it out on our sample data.
Trying Option 1 (RAGQueryEngine)
In [ ]:
synthesizer = get_response_synthesizer(response_mode="compact")
query_engine = RAGQueryEngine(
    retriever=retriever, response_synthesizer=synthesizer
)
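"compact" stuffs the retrieved chunks into as few LLM calls as possible. get_response_synthesizer accepts other strategies too (e.g. "refine" or "tree_summarize"), so you can swap the synthesis behavior without touching the engine class:

# e.g. hierarchically summarize across chunks instead of compacting them
synthesizer = get_response_synthesizer(response_mode="tree_summarize")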
In [ ]:
response = query_engine.query("What did the author do growing up?")
response = query_engine.query("作者在成长过程中做了什么?")
In [ ]:
print(str(response))
The author worked on writing and programming outside of school before college. They wrote short stories and tried writing programs on an IBM 1401 computer using an early version of Fortran. They also mentioned getting a microcomputer, building it themselves, and writing simple games and programs on it.
In [ ]:
print(response.source_nodes[0].get_content())
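Because option 1 returns a full Response object, every retrieved chunk is available as a NodeWithScore; a quick sketch for inspecting them all rather than just the first:

# each source node pairs the retrieved text with its similarity score
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:100])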
Trying Option 2 (RAGStringQueryEngine)
In [ ]:
llm = OpenAI(model="gpt-3.5-turbo")
query_engine = RAGStringQueryEngine(
    retriever=retriever,
    response_synthesizer=synthesizer,
    llm=llm,
    qa_prompt=qa_prompt,
)
In [ ]:
response = query_engine.query("What did the author do growing up?")
response = query_engine.query("作者在成长过程中做了什么?")
In [ ]:
print(str(response))
The author worked on writing and programming before college. They wrote short stories and started programming on the IBM 1401 computer in 9th grade. They later got a microcomputer and continued programming, writing simple games and a word processor.