The next generation of AI user interfaces is moving toward audio-native experiences. Users will be able to speak to a chatbot and receive spoken responses in return. Several models have been built under this paradigm, including GPT-4o and mini omni.

In this guide, we'll walk through building your own conversational chat application, using mini omni as an example. You can see a demo of the finished app below:
Our application will enable the following user experience: the user speaks into their microphone, the app detects when they have stopped speaking, the chatbot replies with a streamed audio response, and the microphone is then reactivated for the next turn.

Let's dive into the implementation details.
We'll stream audio from the user's microphone to the server and, on each new chunk of audio, determine whether the user has stopped speaking.

Here is our `process_audio` function:
```python
import gradio as gr
import numpy as np

from utils import determine_pause


def process_audio(audio: tuple, state: AppState):
    if state.stream is None:
        state.stream = audio[1]
        state.sampling_rate = audio[0]
    else:
        state.stream = np.concatenate((state.stream, audio[1]))

    pause_detected = determine_pause(state.stream, state.sampling_rate, state)
    state.pause_detected = pause_detected

    if state.pause_detected and state.started_talking:
        return gr.Audio(recording=False), state
    return None, state
```

This function takes two inputs:
1. The current audio chunk, a tuple of `(sampling_rate, numpy array of audio)`
2. The current application state

We'll use the following `AppState` dataclass to manage our application state:
```python
from dataclasses import dataclass, field


@dataclass
class AppState:
    stream: np.ndarray | None = None
    sampling_rate: int = 0
    pause_detected: bool = False
    started_talking: bool = False
    stopped: bool = False
    conversation: list = field(default_factory=list)
```

`process_audio` concatenates each new audio chunk onto the existing stream and checks whether the user has stopped speaking. If a pause is detected, it returns an update that stops the recording; otherwise, it returns `None` to indicate no change.
The implementation of the `determine_pause` function is specific to the omni-mini project, and you can find it there.
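If you want a self-contained starting point, here is a minimal, hypothetical sketch of what such a pause detector could look like. It is not the omni-mini implementation: it uses a simple RMS-energy threshold, and it assumes `determine_pause` is also responsible for setting `state.started_talking` once speech is first detected, which is what `process_audio` above expects:

```python
import numpy as np


def determine_pause(stream: np.ndarray, sampling_rate: int, state: "AppState") -> bool:
    """Rough stand-in: report a pause after ~1 second of low audio energy."""
    window = int(sampling_rate * 1.0)  # look at the last second of audio
    if len(stream) < window:
        return False

    recent = stream[-window:].astype(np.float32)
    rms = np.sqrt(np.mean(recent ** 2))
    # Scale the (arbitrary) threshold for integer PCM input such as int16.
    threshold = 0.01 * np.iinfo(stream.dtype).max if np.issubdtype(stream.dtype, np.integer) else 0.01

    if rms >= threshold:
        state.started_talking = True  # speech has been detected at least once
        return False
    return state.started_talking      # only count a pause after speech started
```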
After processing the user's audio, we need to generate and stream the chatbot's response. Here is our `response` function:
```python
import io
import tempfile

from pydub import AudioSegment


def response(state: AppState):
    if not state.pause_detected and not state.started_talking:
        return None, AppState()

    # Convert the recorded stream to a WAV file in memory.
    audio_buffer = io.BytesIO()
    segment = AudioSegment(
        state.stream.tobytes(),
        frame_rate=state.sampling_rate,
        sample_width=state.stream.dtype.itemsize,
        channels=(1 if len(state.stream.shape) == 1 else state.stream.shape[1]),
    )
    segment.export(audio_buffer, format="wav")

    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
        f.write(audio_buffer.getvalue())

    state.conversation.append({"role": "user",
                               "content": {"path": f.name,
                                           "mime_type": "audio/wav"}})

    # Stream the chatbot's reply while accumulating it for the history.
    output_buffer = b""
    for mp3_bytes in speaking(audio_buffer.getvalue()):
        output_buffer += mp3_bytes
        yield mp3_bytes, state

    with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as f:
        f.write(output_buffer)

    state.conversation.append({"role": "assistant",
                               "content": {"path": f.name,
                                           "mime_type": "audio/mp3"}})

    yield None, AppState(conversation=state.conversation)
```

This function:
1. Converts the user's recorded audio to a WAV file and adds it to the conversation history
2. Generates and streams the chatbot's response using the `speaking` function
3. Saves the streamed response as an MP3 file and adds it to the conversation history

Note: the implementation of the `speaking` function is specific to the omni-mini project, and you can find it there.
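For reference, `response` only assumes that `speaking` is a generator that takes the user's WAV audio as bytes and yields the reply as a stream of MP3 byte chunks. The hypothetical stub below shows that interface shape; instead of running the actual omni-mini model, it simply re-encodes the input audio as MP3 and yields it in chunks (pydub's MP3 export requires ffmpeg to be installed, and `chunk_size` is an assumed parameter):

```python
import io

from pydub import AudioSegment


def speaking(wav_bytes: bytes, chunk_size: int = 4096):
    """Placeholder generator with the interface that response() expects.

    A real implementation would run the chatbot model here and yield
    MP3 chunks as they are produced.
    """
    segment = AudioSegment.from_file(io.BytesIO(wav_bytes), format="wav")

    mp3_buffer = io.BytesIO()
    segment.export(mp3_buffer, format="mp3")
    mp3_bytes = mp3_buffer.getvalue()

    # Yield the MP3 data in small chunks to simulate streaming.
    for i in range(0, len(mp3_bytes), chunk_size):
        yield mp3_bytes[i:i + chunk_size]
```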
Now let's put everything together using Gradio's Blocks API:
```python
import gradio as gr


def start_recording_user(state: AppState):
    if not state.stopped:
        return gr.Audio(recording=True)


with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            input_audio = gr.Audio(
                label="Input Audio", sources="microphone", type="numpy"
            )
        with gr.Column():
            chatbot = gr.Chatbot(label="Conversation", type="messages")
            output_audio = gr.Audio(label="Output Audio", streaming=True, autoplay=True)
    state = gr.State(value=AppState())

    stream = input_audio.stream(
        process_audio,
        [input_audio, state],
        [input_audio, state],
        stream_every=0.5,
        time_limit=30,
    )
    respond = input_audio.stop_recording(
        response,
        [state],
        [output_audio, state]
    )
    respond.then(lambda s: s.conversation, [state], [chatbot])

    restart = output_audio.stop(
        start_recording_user,
        [state],
        [input_audio]
    )

    cancel = gr.Button("Stop Conversation", variant="stop")
    cancel.click(lambda: (AppState(stopped=True), gr.Audio(recording=False)), None,
                 [state, input_audio], cancels=[respond, restart])


if __name__ == "__main__":
    demo.launch()
```

This setup creates a user interface with:

- An input audio component that records the user's message from the microphone
- A chatbot component that displays the conversation history
- An output audio component that streams and autoplays the chatbot's spoken response
- A "Stop Conversation" button that cancels and resets the conversation
The app streams the user's audio in 0.5-second chunks, processes it, generates a response, and updates the conversation history accordingly.
This guide has shown how to build a conversational chatbot application with Gradio and the mini omni model. You can adapt this framework to create all kinds of audio-based chatbot demos. To see the full application in action, check out the Hugging Face Spaces demo: https://huggingface.co/spaces/gradio/omni-mini

Feel free to experiment with different models, audio-processing techniques, or user-interface designs to create your own unique conversational AI experience!