Usage Pattern (Response Evaluation)#
Using BaseEvaluator#
All of the evaluation modules in LlamaIndex implement the BaseEvaluator class, with two main methods.
The evaluate method takes in query, contexts, response, and additional keyword arguments:
def evaluate(
    self,
    query: Optional[str] = None,
    contexts: Optional[Sequence[str]] = None,
    response: Optional[str] = None,
    **kwargs: Any,
) -> EvaluationResult:
The evaluate_response method provides an alternative interface that takes in a LlamaIndex Response object (which contains the response string and source nodes) instead of separate contexts and response:
def evaluate_response(
    self,
    query: Optional[str] = None,
    response: Optional[Response] = None,
    **kwargs: Any,
) -> EvaluationResult:
It's functionally the same as evaluate, just simpler to use when working with LlamaIndex objects.
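For instance, assuming you have an evaluator, a query, and a Response object in scope, the two calls below are roughly equivalent; the second spells out what evaluate_response typically derives from the Response object (a minimal illustrative sketch, not part of the API reference):
# convenience form: response string and contexts come from the Response object
result_a = evaluator.evaluate_response(query=query, response=response)
# roughly equivalent explicit form
result_b = evaluator.evaluate(
    query=query,
    response=response.response,
    contexts=[node.get_content() for node in response.source_nodes],
)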
Using EvaluationResult#
Each evaluator outputs an EvaluationResult when executed:
eval_result = evaluator.evaluate(query=..., contexts=..., response=...)
eval_result.passing # binary pass/fail
eval_result.score # numerical score
eval_result.feedback # string feedback
Different evaluators may populate only a subset of these result fields.
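Since you can't always assume every field is set, a defensive check like the following can be useful when printing results (an illustrative sketch, reusing eval_result from above):
# fields a given evaluator does not produce are left as None
if eval_result.passing is not None:
    print("passing:", eval_result.passing)
if eval_result.score is not None:
    print("score:", eval_result.score)
if eval_result.feedback:
    print("feedback:", eval_result.feedback)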
Evaluating Response Faithfulness (i.e. Hallucination)#
The FaithfulnessEvaluator evaluates whether the answer is faithful to the retrieved contexts (in other words, whether it hallucinates):
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.core.evaluation import FaithfulnessEvaluator
# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)
# build index
...
# define evaluator
evaluator = FaithfulnessEvaluator(llm=llm)
# query index
query_engine = vector_index.as_query_engine()
response = query_engine.query(
    "What battles took place in New York City in the American Revolution?"
)
eval_result = evaluator.evaluate_response(response=response)
print(str(eval_result.passing))
You can also choose to evaluate each source context individually:
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.core.evaluation import FaithfulnessEvaluator
# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)
# build index
...
# define evaluator
evaluator = FaithfulnessEvaluator(llm=llm)
# query index
query_engine = vector_index.as_query_engine()
response = query_engine.query(
"What battles took place in New York City in the American Revolution?"
)
response_str = response.response
for source_node in response.source_nodes:
    eval_result = evaluator.evaluate(
        response=response_str, contexts=[source_node.get_content()]
    )
    print(str(eval_result.passing))
You'll get back a list of results, one for each source node in response.source_nodes.
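For example, you could collect those per-node results into a list and summarize them (a small sketch building on the loop above):
# gather one result per source node, then report the overall pass rate
results = [
    evaluator.evaluate(
        response=response_str, contexts=[source_node.get_content()]
    )
    for source_node in response.source_nodes
]
num_passing = sum(1 for r in results if r.passing)
print(f"{num_passing}/{len(results)} source nodes passed")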
Evaluating Query and Response Relevancy#
The RelevancyEvaluator evaluates whether the retrieved context and the answer are relevant and consistent for the given query.
Note that this evaluator requires the query to be passed in, in addition to the Response object:
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.core.evaluation import RelevancyEvaluator
# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)
# build index
...
# define evaluator
evaluator = RelevancyEvaluator(llm=llm)
# query index
query_engine = vector_index.as_query_engine()
query = "What battles took place in New York City in the American Revolution?"
response = query_engine.query(query)
eval_result = evaluator.evaluate_response(query=query, response=response)
print(str(eval_result))
Similarly, you can also evaluate against specific source nodes:
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.core.evaluation import RelevancyEvaluator
# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)
# build index
...
# define evaluator
evaluator = RelevancyEvaluator(llm=llm)
# query index
query_engine = vector_index.as_query_engine()
query = "What battles took place in New York City in the American Revolution?"
response = query_engine.query(query)
response_str = response.response
for source_node in response.source_nodes:
    eval_result = evaluator.evaluate(
        query=query,
        response=response_str,
        contexts=[source_node.get_content()],
    )
    print(str(eval_result.passing))
Question Generation#
LlamaIndex can also generate questions to answer using your data. Combined with the evaluators above, this lets you build a fully automated evaluation pipeline over your data.
from llama_index.core import SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
from llama_index.core.llama_dataset.generator import RagDatasetGenerator
# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)
# build documents
documents = SimpleDirectoryReader("./data").load_data()
# define generator, generate questions
dataset_generator = RagDatasetGenerator.from_documents(
    documents=documents,
    llm=llm,
    num_questions_per_chunk=10,  # set the number of questions per node
)
rag_dataset = dataset_generator.generate_questions_from_nodes()
questions = [e.query for e in rag_dataset.examples]
Batch Evaluation#
We also provide a batch evaluation runner for running a set of evaluators across many questions:
from llama_index.core.evaluation import BatchEvalRunner
runner = BatchEvalRunner(
    {"faithfulness": faithfulness_evaluator, "relevancy": relevancy_evaluator},
    workers=8,
)
eval_results = await runner.aevaluate_queries(
    vector_index.as_query_engine(), queries=questions
)
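The returned eval_results maps each evaluator name to a list of EvaluationResult objects, one per query. A quick way to summarize them might look like this (a minimal sketch):
# report the pass rate for each evaluator ("faithfulness", "relevancy")
for name, results in eval_results.items():
    pass_rate = sum(1 for r in results if r.passing) / len(results)
    print(f"{name}: {pass_rate:.0%} passing")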
Integrations#
We also integrate with community evaluation tools.
DeepEval#
DeepEval offers 6 evaluators (including 3 RAG evaluators, for evaluating both the retriever and the generator), powered by its proprietary evaluation metrics. To begin, install deepeval:
pip install -U deepeval
You can then import and use evaluators from deepeval. A full example:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from deepeval.integrations.llama_index import DeepEvalAnswerRelevancyEvaluator
documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(documents)
rag_application = index.as_query_engine()
# An example input to your RAG application
user_input = "What is LlamaIndex?"
# LlamaIndex returns a response object that contains
# both the output string and retrieved nodes
response_object = rag_application.query(user_input)
evaluator = DeepEvalAnswerRelevancyEvaluator()
evaluation_result = evaluator.evaluate_response(
    query=user_input, response=response_object
)
print(evaluation_result)
Here's how you can import all 6 evaluators from deepeval:
from deepeval.integrations.llama_index import (
    DeepEvalAnswerRelevancyEvaluator,
    DeepEvalFaithfulnessEvaluator,
    DeepEvalContextualRelevancyEvaluator,
    DeepEvalSummarizationEvaluator,
    DeepEvalBiasEvaluator,
    DeepEvalToxicityEvaluator,
)
To learn more about how to use deepeval's evaluation metrics with LlamaIndex and take advantage of its full LLM testing suite, visit the docs.