# Output Parsing Modules

LlamaIndex supports integrations with output parsing modules offered by other frameworks. These output parsing modules can be used in the following ways:

- To provide formatting instructions for any prompt / query (through `output_parser.format`)
- To provide "parsing" of LLM outputs (through `output_parser.parse`)
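To make those two hooks concrete, here is a minimal, self-contained sketch of the contract an output parser fulfils. This is a toy class, not LlamaIndex's actual `BaseOutputParser`; the class name, the instruction text, and the JSON-code-fence convention are all illustrative assumptions:

```python
import json
import re


class SimpleJSONOutputParser:
    """Toy illustration of the two-method output-parser contract.

    Not LlamaIndex's real base class; a real integration would
    subclass the library's own output-parser interface instead.
    """

    FORMAT_INSTRUCTIONS = (
        "Return your answer as a JSON object inside a ```json code fence."
    )

    def format(self, query: str) -> str:
        # hook 1: append formatting instructions to the prompt/query text
        return f"{query}\n\n{self.FORMAT_INSTRUCTIONS}"

    def parse(self, output: str) -> dict:
        # hook 2: extract the structured payload from the raw LLM output
        match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", output, re.DOTALL)
        payload = match.group(1) if match else output
        return json.loads(payload)


parser = SimpleJSONOutputParser()
prompt = parser.format("List two hobbies.")
llm_output = '```json\n{"hobbies": ["writing", "programming"]}\n```'
result = parser.parse(llm_output)
```

The key point is the split: `format` runs before the LLM call (decorating the prompt), while `parse` runs after it (recovering structure from free text).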
## Guardrails

Guardrails is an open-source Python package for specification/validation/correction of output schemas. See below for a code example.
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.output_parsers.guardrails import GuardrailsOutputParser
from llama_index.llms.openai import OpenAI

# load documents, build index
documents = SimpleDirectoryReader("../paul_graham_essay/data").load_data()
index = VectorStoreIndex(documents, chunk_size=512)

# define query / output spec
rail_spec = """<rail version="0.1">

<output>
    <list name="points" description="Bullet points regarding events in the author's life.">
        <object>
            <string name="explanation" format="one-line" on-fail-one-line="noop" />
            <string name="explanation2" format="one-line" on-fail-one-line="noop" />
            <string name="explanation3" format="one-line" on-fail-one-line="noop" />
        </object>
    </list>
</output>

<prompt>

Query string here.

@xml_prefix_prompt

{output_schema}

@json_suffix_prompt_v2_wo_none
</prompt>
</rail>
"""

# define output parser
output_parser = GuardrailsOutputParser.from_rail_string(
    rail_spec, llm=OpenAI()
)

# Attach output parser to LLM
llm = OpenAI(output_parser=output_parser)

# obtain a structured response
query_engine = index.as_query_engine(llm=llm)
response = query_engine.query(
    "What are the three items the author did growing up?",
)
print(response)
```

Output:

```
{'points': [{'explanation': 'Writing short stories', 'explanation2': 'Programming on an IBM 1401', 'explanation3': 'Using microcomputers'}]}
```
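Note that what gets printed is a Python-dict-style string (single quotes, not JSON). If you need it as an actual Python object, `ast.literal_eval` handles that repr form; this is a generic post-processing sketch, not part of the Guardrails integration itself:

```python
import ast

# the structured output as printed above (Python dict repr, single quotes)
raw = (
    "{'points': [{'explanation': 'Writing short stories', "
    "'explanation2': 'Programming on an IBM 1401', "
    "'explanation3': 'Using microcomputers'}]}"
)

# literal_eval safely evaluates literal structures only (no arbitrary code)
data = ast.literal_eval(raw)
points = data["points"][0]
```

If the LLM were instructed to emit strict JSON instead, `json.loads` would be the right tool; `ast.literal_eval` is just the match for the repr-style string shown here.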
## Langchain

Langchain also offers output parsing modules that you can use within LlamaIndex.
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.output_parsers import LangchainOutputParser
from llama_index.llms.openai import OpenAI
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

# load documents, build index
documents = SimpleDirectoryReader("../paul_graham_essay/data").load_data()
index = VectorStoreIndex.from_documents(documents)

# define output schema
response_schemas = [
    ResponseSchema(
        name="Education",
        description="Describes the author's educational experience/background.",
    ),
    ResponseSchema(
        name="Work",
        description="Describes the author's work experience/background.",
    ),
]

# define output parser
lc_output_parser = StructuredOutputParser.from_response_schemas(
    response_schemas
)
output_parser = LangchainOutputParser(lc_output_parser)

# Attach output parser to LLM
llm = OpenAI(output_parser=output_parser)

# obtain a structured response
query_engine = index.as_query_engine(llm=llm)
response = query_engine.query(
    "What are a few things the author did growing up?",
)
print(str(response))
```

Output:

```
{'Education': 'Before college, the author wrote short stories and experimented with programming on an IBM 1401.', 'Work': 'The author worked on writing and programming outside of school.'}
```

More examples: