Vector Store Index Usage Examples
In this guide, we show how to use the vector store index with different vector store implementations.
We start with the default in-memory vector store and default query configuration, which takes only a few lines of code to get started, then move on to using a custom, self-hosted vector store and advanced settings such as metadata filters.
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Load documents and build index
documents = SimpleDirectoryReader(
    "../../examples/data/paul_graham"
).load_data()
index = VectorStoreIndex.from_documents(documents)
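Under the hood, the default in-memory vector store embeds each document chunk and answers a query by nearest-neighbor search over those embeddings. The following is a toy pure-Python sketch of that retrieval step (hand-made embeddings and a plain cosine-similarity top-k scan, not LlamaIndex's actual implementation):

```python
from math import sqrt


def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def top_k(query_embedding, store, k=2):
    # store: list of (text, embedding) pairs, standing in for the index.
    scored = [
        (cosine_similarity(query_embedding, emb), text) for text, emb in store
    ]
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]


store = [
    ("chunk about painting", [1.0, 0.0, 0.0]),
    ("chunk about programming", [0.0, 1.0, 0.0]),
    ("chunk about startups", [0.0, 0.9, 0.1]),
]
print(top_k([0.0, 1.0, 0.0], store, k=2))
# → ['chunk about programming', 'chunk about startups']
```

The real index additionally handles chunking, embedding-model calls, and persistence; only the scoring idea is the same.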
Custom Vector Store
You can use a custom vector store (in this case, PineconeVectorStore) as follows:
import pinecone
from llama_index import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index.vector_stores import PineconeVectorStore

# init pinecone
pinecone.init(api_key="<api_key>", environment="<environment>")
pinecone.create_index(
    "quickstart", dimension=1536, metric="euclidean", pod_type="p1"
)

# construct vector store and customize storage context
storage_context = StorageContext.from_defaults(
    vector_store=PineconeVectorStore(pinecone.Index("quickstart"))
)

# Load documents and build index
documents = SimpleDirectoryReader(
    "../../examples/data/paul_graham"
).load_data()
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context
)
For more examples of how to initialize different vector stores, see Vector Store Integrations.
Connect to external vector stores (with existing embeddings)
If you have already computed embeddings and dumped them into an external vector store (e.g. Pinecone, Chroma), you can use it with LlamaIndex as follows:
vector_store = PineconeVectorStore(pinecone.Index("quickstart"))
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
Configure standard query settings
To configure query settings, you can pass them directly as keyword arguments when building the query engine:
from llama_index.vector_stores.types import ExactMatchFilter, MetadataFilters

query_engine = index.as_query_engine(
    similarity_top_k=3,
    vector_store_query_mode="default",
    filters=MetadataFilters(
        filters=[
            ExactMatchFilter(key="name", value="paul graham"),
        ]
    ),
    alpha=None,
    doc_ids=None,
)
response = query_engine.query("what did the author do growing up?")
Note that metadata filtering is applied against the metadata specified in Node.metadata.
Alternatively, if you are using the lower-level compositional API:
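To make the exact-match semantics concrete, here is a rough pure-Python illustration of filtering nodes by a metadata key (the Node stand-in and helper function are hypothetical, not the library's classes):

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    # Minimal stand-in for an indexed node: text plus a metadata dict.
    text: str
    metadata: dict = field(default_factory=dict)


def exact_match_filter(nodes, key, value):
    # Keep only nodes whose metadata contains `key` with exactly `value`;
    # nodes missing the key are excluded.
    return [n for n in nodes if n.metadata.get(key) == value]


nodes = [
    Node("essay on growing up", {"name": "paul graham"}),
    Node("unrelated note", {"name": "someone else"}),
    Node("untagged chunk"),
]
matches = exact_match_filter(nodes, "name", "paul graham")
print([n.text for n in matches])  # → ['essay on growing up']
```

In the real library the filter runs inside (or is pushed down to) the vector store, so only matching nodes are considered during similarity search.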
from llama_index import get_response_synthesizer
from llama_index.indices.vector_store.retrievers import VectorIndexRetriever
from llama_index.query_engine.retriever_query_engine import (
    RetrieverQueryEngine,
)

# build retriever
retriever = VectorIndexRetriever(
    index=index,
    similarity_top_k=3,
    vector_store_query_mode="default",
    filters=[ExactMatchFilter(key="name", value="paul graham")],
    alpha=None,
    doc_ids=None,
)

# build query engine
query_engine = RetrieverQueryEngine(
    retriever=retriever, response_synthesizer=get_response_synthesizer()
)

# query
response = query_engine.query("what did the author do growing up?")
Configure vector store specific keyword arguments
You can also customize keyword arguments unique to a specific vector store implementation by passing in vector_store_kwargs:
query_engine = index.as_query_engine(
    similarity_top_k=3,
    # only works for pinecone
    vector_store_kwargs={
        "filter": {"name": "paul graham"},
    },
)
response = query_engine.query("what did the author do growing up?")
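Since vector_store_kwargs is forwarded to the underlying store's query call, the accepted keys depend entirely on the backend (here, Pinecone's native filter syntax). A simplified sketch of that pass-through pattern, with hypothetical function names:

```python
def store_query(query, top_k, filter=None):
    # Stand-in for a backend-specific query method (e.g. Pinecone's),
    # which understands its own native `filter` argument.
    return {"query": query, "top_k": top_k, "filter": filter}


def query_engine_query(query, similarity_top_k, vector_store_kwargs=None):
    # The engine forwards backend-specific kwargs to the store untouched;
    # it does not interpret them itself.
    return store_query(query, similarity_top_k, **(vector_store_kwargs or {}))


result = query_engine_query(
    "what did the author do growing up?",
    similarity_top_k=3,
    vector_store_kwargs={"filter": {"name": "paul graham"}},
)
print(result["filter"])  # → {'name': 'paul graham'}
```

This is why the same key (e.g. "filter") may be valid for one backend and rejected by another: no translation layer sits in between.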
Use an auto retriever
You can also have the LLM automatically decide query settings for you! Right now, we support automatically setting exact-match metadata filters and the top-k parameter.
from llama_index import get_response_synthesizer
from llama_index.indices.vector_store.retrievers import (
    VectorIndexAutoRetriever,
)
from llama_index.query_engine.retriever_query_engine import (
    RetrieverQueryEngine,
)
from llama_index.vector_stores.types import MetadataInfo, VectorStoreInfo

vector_store_info = VectorStoreInfo(
    content_info="brief biography of celebrities",
    metadata_info=[
        MetadataInfo(
            name="category",
            type="str",
            description="Category of the celebrity, one of [Sports, Entertainment, Business, Music]",
        ),
        MetadataInfo(
            name="country",
            type="str",
            description="Country of the celebrity, one of [United States, Barbados, Portugal]",
        ),
    ],
)

# build retriever
retriever = VectorIndexAutoRetriever(
    index, vector_store_info=vector_store_info
)

# build query engine
query_engine = RetrieverQueryEngine(
    retriever=retriever, response_synthesizer=get_response_synthesizer()
)

# query
response = query_engine.query(
    "Tell me about two celebrities from United States"
)
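Conceptually, the auto retriever shows the LLM the VectorStoreInfo schema alongside the user query, and the LLM returns a structured query spec: a query string, exact-match metadata filters, and optionally a top-k value. The following toy, rule-based stand-in illustrates the shape of that inference step; the real version delegates the decision to an LLM, and all names here are illustrative:

```python
def infer_query_spec(query, known_values):
    """Toy stand-in for LLM-driven query-spec inference.

    known_values maps a metadata field to its allowed values (as declared
    in the VectorStoreInfo schema). Here we simply scan the query text for
    any allowed value; a real auto retriever asks an LLM to produce the
    spec, so it can also paraphrase the query and pick top-k.
    """
    filters = {
        field: value
        for field, values in known_values.items()
        for value in values
        if value.lower() in query.lower()
    }
    # top_k fixed for this sketch; the LLM would infer it from the query.
    return {"query": query, "filters": filters, "top_k": 2}


spec = infer_query_spec(
    "Tell me about two celebrities from United States",
    {
        "category": ["Sports", "Entertainment", "Business", "Music"],
        "country": ["United States", "Barbados", "Portugal"],
    },
)
print(spec["filters"])  # → {'country': 'United States'}
```

The inferred filters are then applied as exact-match metadata filters during retrieval, exactly as in the manual MetadataFilters example above.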