Chat Engine - Condense Question Mode¶
Condense question is a simple chat mode built on top of a query engine over your data.
For each chat interaction:
- first generate a standalone question from the conversation context and the last message, then
- query the query engine with the condensed question for a response.
This approach is simple, and works for questions directly related to the knowledge base. Since it always queries the knowledge base, it can have difficulty answering meta questions like "what did I ask you before?"
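Under the hood, each turn makes two calls: one LLM call to rewrite the latest message, and one query-engine call to answer it. Below is a minimal conceptual sketch of that loop; the function and prompt are illustrative assumptions, not LlamaIndex's actual internals (the real logic lives in CondenseQuestionChatEngine).

def condense_question_chat(llm, query_engine, chat_history, message):
    # Step 1: rewrite the follow-up as a standalone question,
    # using the conversation so far as context (illustrative prompt).
    standalone = llm.complete(
        f"Given the conversation:\n{chat_history}\n"
        f"Rewrite this follow-up as a standalone question: {message}"
    ).text
    # Step 2: answer the standalone question against the knowledge base.
    response = query_engine.query(standalone)
    # Record the turn so later follow-ups can be condensed correctly.
    chat_history.append(("user", message))
    chat_history.append(("assistant", str(response)))
    return response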
If you're opening this Notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]:
%pip install llama-index-llms-openai
In [ ]:
!pip install llama-index
Download Data¶
In [ ]:
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
Get started in 5 lines of code¶
Load data and build index
In [ ]:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
data = SimpleDirectoryReader(input_dir="./data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(data)
Configure chat engine
In [ ]:
chat_engine = index.as_chat_engine(chat_mode="condense_question", verbose=True)
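The condensing step can also be customized. Here is a sketch, assuming the condense_question_prompt keyword is forwarded to the engine and that the template uses the {chat_history} and {question} variables, as the default prompt does; verify against your installed LlamaIndex version.

from llama_index.core import PromptTemplate

# Assumed keyword and template variables; check the current API docs.
custom_prompt = PromptTemplate(
    "Given the following conversation and a follow up message, "
    "rephrase the follow up message to be a standalone question.\n"
    "Chat History:\n{chat_history}\n"
    "Follow Up Message: {question}\n"
    "Standalone question: "
)

chat_engine = index.as_chat_engine(
    chat_mode="condense_question",
    condense_question_prompt=custom_prompt,
    verbose=True,
)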
Chat with your data
In [ ]:
response = chat_engine.chat("What did Paul Graham do after YC?")
response = chat_engine.chat("Paul Graham在YC之后做了什么?")
Querying with: What was the next step in Paul Graham's career after his involvement with Y Combinator?
In [ ]:
print(response)
Paul Graham's next step in his career after his involvement with Y Combinator was to take up painting. He spent most of the rest of 2014 painting and then in March 2015 he started working on Lisp again.
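The chat response also carries the chunks that the query engine retrieved, which is useful for checking what the condensed question actually matched. A quick way to inspect them, assuming the source_nodes attribute exposed on LlamaIndex chat responses:

# Each source node is a retrieved chunk plus its similarity score.
for node in response.source_nodes:
    print(node.node_id, node.score)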
Ask a follow up question
In [ ]:
response = chat_engine.chat("What about after that?")
response = chat_engine.chat("之后呢?")
Querying with: What did Paul Graham do after he started working on Lisp again in March 2015?
In [ ]:
print(response)
Paul Graham spent the rest of 2015 writing essays and working on his new dialect of Lisp, which he called Arc. He also looked for an apartment to buy and started planning a second still life painting from the same objects.
In [ ]:
response = chat_engine.chat("Can you tell me more?")
response = chat_engine.chat("你能详细说明一下吗?")
Querying with: What did Paul Graham do after he started working on Lisp again in March 2015?
In [ ]:
print(response)
Paul Graham spent the rest of 2015 writing essays and working on his new dialect of Lisp, which he called Arc. He also looked for an apartment to buy and started planning for a second still life painting.
Reset conversation state
In [ ]:
chat_engine.reset()
In [ ]:
response = chat_engine.chat("What about after that?")
response = chat_engine.chat("之后呢?")
Querying with: What happens after the current situation?
In [ ]:
print(response)
After the current situation, the narrator resumes painting and experimenting with a new kind of still life. He also resumes his old life in New York, now that he is rich. He is able to take taxis and eat in restaurants, which is exciting for a while. He also starts to make connections with other people who are trying to paint in New York.
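Because the history was cleared, there was no context to condense with, so the follow-up above was rewritten blindly and answered off-topic. If you want to supply conversation context explicitly rather than replaying turns, you can pass prior history into the call; a sketch assuming the chat_history parameter accepted by LlamaIndex chat engines:

from llama_index.core.llms import ChatMessage, MessageRole

# Seed the conversation so the condensed question has context to draw on.
history = [
    ChatMessage(role=MessageRole.USER, content="What did Paul Graham do after YC?"),
    ChatMessage(
        role=MessageRole.ASSISTANT,
        content="He took up painting, then returned to working on Lisp in 2015.",
    ),
]

response = chat_engine.chat("What about after that?", chat_history=history)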
Streaming Support¶
In [ ]:
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
data = SimpleDirectoryReader(input_dir="../data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(data)
In [ ]:
chat_engine = index.as_chat_engine(
chat_mode="condense_question", llm=llm, verbose=True
)
In [ ]:
response = chat_engine.stream_chat("What did Paul Graham do after YC?")
for token in response.response_gen:
print(token, end="")
response = chat_engine.stream_chat("Paul Graham在YC之后做了什么?")
for token in response.response_gen:
print(token, end="")
Querying with: What did Paul Graham do after leaving YC?
After leaving YC, Paul Graham started painting and focused on improving his skills in that area. He then started writing essays again and began working on Lisp.
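If you don't need token-level control over printing, the streaming response object also offers a convenience method; a sketch assuming print_response_stream() on LlamaIndex's streaming chat response:

response = chat_engine.stream_chat("What about after that?")
response.print_response_stream()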