# Simple Fusion Retriever
In this example, we demonstrate how to combine retrieval results from multiple queries and multiple indexes.

The retrieved nodes are returned as the top-k results across all queries and indexes, with any duplicate nodes removed along the way.
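As a rough sketch of the fusion step described above (not the LlamaIndex internals — the function name and score handling here are illustrative assumptions), deduplicating nodes by ID, keeping the best score seen for each, and returning the top-k could look like this:

```python
def fuse_results(results_per_query, top_k=2):
    """Merge (node_id, score) lists from several queries/indexes:
    keep the highest score seen for each node, then return the top-k."""
    best_scores = {}
    for results in results_per_query:
        for node_id, score in results:
            if score > best_scores.get(node_id, float("-inf")):
                best_scores[node_id] = score
    ranked = sorted(best_scores.items(), key=lambda item: item[1], reverse=True)
    return ranked[:top_k]


# Two queries return overlapping nodes; "a" is deduplicated at its best score
fused = fuse_results(
    [
        [("a", 0.78), ("b", 0.60)],
        [("a", 0.71), ("c", 0.65)],
    ]
)
# fused == [("a", 0.78), ("c", 0.65)]
```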
```python
import os

import openai

os.environ["OPENAI_API_KEY"] = "sk-..."
openai.api_key = os.environ["OPENAI_API_KEY"]
```

For this notebook, we will use two very similar pages from our documentation, each stored in a separate index.
```python
from llama_index.core import SimpleDirectoryReader

documents_1 = SimpleDirectoryReader(
    input_files=["../../community/integrations/vector_stores.md"]
).load_data()
documents_2 = SimpleDirectoryReader(
    input_files=["../../module_guides/storing/vector_stores.md"]
).load_data()
```

```python
from llama_index.core import VectorStoreIndex

index_1 = VectorStoreIndex.from_documents(documents_1)
index_2 = VectorStoreIndex.from_documents(documents_2)
```

In this step, we fuse our indexes into a single retriever. This retriever will also augment our query by generating additional queries related to the original question, and will aggregate the results.
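The overall flow can be sketched with stand-in components (the function names and toy retrievers below are hypothetical, not the actual `QueryFusionRetriever` implementation): generate extra queries, run every query against every retriever, and pool the candidates before deduplication.

```python
def generate_queries(query, n):
    # Stand-in for the LLM call that rewrites the original query
    return [f"{query} (variation {i})" for i in range(1, n + 1)]


def fused_retrieve(query, retrievers, num_queries=4):
    # num_queries counts the original query plus the generated ones
    queries = [query] + generate_queries(query, num_queries - 1)
    candidates = []
    for q in queries:
        for retrieve in retrievers:
            candidates.extend(retrieve(q))
    return candidates


# Two toy retrievers, each returning a single (node_id, score) pair
retriever_1 = lambda q: [("node-a", 0.9)]
retriever_2 = lambda q: [("node-b", 0.8)]

candidates = fused_retrieve("chroma setup", [retriever_1, retriever_2])
# 4 queries x 2 retrievers x 1 result each = 8 candidates before deduplication
```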
This setup will issue 4 queries: once with your original query, plus 3 generated queries.

By default, it uses the following prompt to generate the extra queries:
```python
QUERY_GEN_PROMPT = (
    "You are a helpful assistant that generates multiple search queries based on a "
    "single input query. Generate {num_queries} search queries, one on each line, "
    "related to the following input query:\n"
    "Query: {query}\n"
    "Queries:\n"
)
```

```python
from llama_index.core.retrievers import QueryFusionRetriever

retriever = QueryFusionRetriever(
    [index_1.as_retriever(), index_2.as_retriever()],
    similarity_top_k=2,
    num_queries=4,  # set this to 1 to disable query generation
    use_async=True,
    verbose=True,
    # query_gen_prompt="...",  # we could override the query generation prompt here
)
```

```python
# apply nested async to run in a notebook, where an event loop is already running
import nest_asyncio

nest_asyncio.apply()
```

```python
nodes_with_scores = retriever.retrieve("How do I setup a chroma vector store?")
```

```
Generated queries:
1. What are the steps to set up a chroma vector store?
2. Best practices for configuring a chroma vector store
3. Troubleshooting common issues when setting up a chroma vector store
```

```python
for node in nodes_with_scores:
    print(f"Score: {node.score:.2f} - {node.text[:100]}...")
```

```
Score: 0.78 - # Vector Stores
Vector stores contain embedding vectors of ingested document chunks(and sometimes ...
Score: 0.78 - # Using Vector Stores
LlamaIndex offers multiple integration points with vector stores / vector dat...
```

Now, we can plug the retriever into a query engine to synthesize natural language responses.
```python
from llama_index.core.query_engine import RetrieverQueryEngine

query_engine = RetrieverQueryEngine.from_args(retriever)

response = query_engine.query(
    "How do I setup a chroma vector store? Can you give an example?"
)
```

```
Generated queries:
1. How to set up a chroma vector store?
2. Step-by-step guide for creating a chroma vector store.
3. Examples of chroma vector store setups and configurations.
```

```python
from llama_index.core.response.notebook_utils import display_response

display_response(response)
```

**`Final Response:`** To set up a Chroma vector store, follow these steps:

- Import the required libraries:

```python
import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore
```

- Create a Chroma client:

```python
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("quickstart")
```

- Construct the vector store:

```python
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
```

Here is a complete example that follows the steps above:

```python
import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore

# Create a Chroma client
# EphemeralClient operates purely in-memory, PersistentClient will also save to disk
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("quickstart")

# Construct the vector store
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
```

This example demonstrates how to create a Chroma client, create a collection named "quickstart", and then construct a Chroma vector store using that collection.