# Mem0
Mem0 (pronounced "mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. It remembers user preferences and traits and keeps them updated over time, making it well suited for applications such as customer-support chatbots and AI assistants.

Mem0 offers two powerful ways to leverage our technology: our managed platform and our open-source solution.

If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
```python
%pip install llama-index llama-index-memory-mem0
```

## Setup with Mem0 Platform

Set your Mem0 Platform API key as an environment variable. You can replace `<your-mem0-api-key>` with your actual API key:

> Note: You can obtain your Mem0 Platform API key from the Mem0 Platform.
```python
import os

os.environ["MEM0_API_KEY"] = "m0-..."
```

Using `from_client` (for the Mem0 Platform API):
```python
from llama_index.memory.mem0 import Mem0Memory

context = {"user_id": "test_users_1"}
memory_from_client = Mem0Memory.from_client(
    context=context,
    api_key="m0-...",
    search_msg_limit=4,  # Default is 5
)
```

The Mem0 context is used to identify the user, agent, or conversation in Mem0. At least one of its fields must be passed to the `Mem0Memory` constructor.

`search_msg_limit` is optional and defaults to 5. It is the number of messages from the chat history that are used for memory retrieval from Mem0. More messages provide more context for retrieval, but also increase retrieval time and may lead to some unwanted results.
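To make that trade-off concrete, here is a small stand-in (pure Python, illustrative only, not Mem0's implementation) for how a bounded message window feeds retrieval: only the most recent `search_msg_limit` messages contribute to the search query.

```python
def build_search_query(chat_history: list[str], search_msg_limit: int = 5) -> str:
    """Join the most recent `search_msg_limit` messages into one retrieval
    query. A larger limit adds context, but also latency and noise."""
    return " ".join(chat_history[-search_msg_limit:])


history = [
    "Hi, my name is Mayank.",
    "I work with LlamaIndex.",
    "My preferred way of communication would be Email.",
    "Send me an update of your product.",
]
# Only the last two messages reach the memory search.
print(build_search_query(history, search_msg_limit=2))
```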
## Setup with `from_config` (for Mem0 OSS)
```python
os.environ["OPENAI_API_KEY"] = "<your-api-key>"

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test_9",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 1536,  # Change this according to your local model's dimensions
        },
    },
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "version": "v1.1",
}

memory_from_config = Mem0Memory.from_config(
    context=context,
    config=config,
    search_msg_limit=4,  # Default is 5
)
```

## Initialize LLM
```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o", api_key="sk-...")
```

## Mem0 for Function Calling Agents

Use Mem0 as the memory store for a `FunctionAgent`.
```python
def call_fn(name: str):
    """Call the provided name.

    Args:
        name: str (Name of the person)
    """
    print(f"Calling... {name}")


def email_fn(name: str):
    """Email the provided name.

    Args:
        name: str (Name of the person)
    """
    print(f"Emailing... {name}")


from llama_index.core.agent.workflow import FunctionAgent
```
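The point of giving the agent a memory is that a stored preference can steer later tool choice, even when the current prompt doesn't mention it. Here is a toy sketch of that idea (the names `ToyMemory`, `route`, `toy_email`, and `toy_call` are hypothetical; this is not Mem0's or LlamaIndex's internals):

```python
class ToyMemory:
    """Dict-backed stand-in for a per-user memory layer (illustrative only)."""

    def __init__(self):
        self._store = {}

    def add(self, user_id: str, key: str, value: str):
        self._store.setdefault(user_id, {})[key] = value

    def get(self, user_id: str, key: str, default=None):
        return self._store.get(user_id, {}).get(key, default)


# Toy tools that return strings instead of printing, for easy checking.
def toy_email(name: str) -> str:
    return f"Emailing... {name}"


def toy_call(name: str) -> str:
    return f"Calling... {name}"


def route(memory: ToyMemory, user_id: str, name: str) -> str:
    """Pick a tool based on the remembered preference, not the prompt."""
    if memory.get(user_id, "preferred_channel") == "email":
        return toy_email(name)
    return toy_call(name)


mem = ToyMemory()
mem.add("test_users_1", "preferred_channel", "email")
print(route(mem, "test_users_1", "Mayank"))  # → Emailing... Mayank
```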
```python
agent = FunctionAgent(
    tools=[email_fn, call_fn],
    llm=llm,
)

response = await agent.run("Hi, My name is Mayank.", memory=memory_from_client)
print(str(response))
```

```
/Users/loganmarkewich/Library/Caches/pypoetry/virtualenvs/llama-index-caVs7DDe-py3.10/lib/python3.10/site-packages/mem0/client/main.py:33: DeprecationWarning: output_format='v1.0' is deprecated therefore setting it to 'v1.1' by default. Check out the docs for more information: https://docs.mem0.ai/platform/quickstart#4-1-create-memories
  return func(*args, **kwargs)

Hello Mayank! How can I assist you today?
```

```python
response = await agent.run(
    "My preferred way of communication would be Email.",
    memory=memory_from_client,
)
print(str(response))
```

```
Got it, Mayank! Your preferred way of communication is Email. If there's anything specific you need, feel free to let me know!
```

```python
response = await agent.run(
    "Send me an update of your product.", memory=memory_from_client
)
print(str(response))
```

```
Emailing... Mayank
Emailing... Mayank
Calling... Mayank
Emailing... Mayank

I've sent you an update on our product via email. If you have any other questions or need further assistance, feel free to ask!
```

## Mem0 for ReAct Agents
Use Mem0 as the memory store for a `ReActAgent`.
```python
from llama_index.core.agent.workflow import ReActAgent

agent = ReActAgent(
    tools=[call_fn, email_fn],
    llm=llm,
)

response = await agent.run("Hi, My name is Mayank.", memory=memory_from_client)
print(str(response))

response = await agent.run(
    "My preferred way of communication would be Email.",
    memory=memory_from_client,
)
print(str(response))

response = await agent.run(
    "Send me an update of your product.", memory=memory_from_client
)
print(str(response))

response = await agent.run(
    "First call me and then communicate me requirements.",
    memory=memory_from_client,
)
print(str(response))
```
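Conceptually, a ReAct agent alternates thought → action → observation steps until the task is done. A minimal fixed-plan sketch of the final request ("first call, then email") is below; this is pure illustrative Python, not the `ReActAgent` implementation, and a real ReAct loop would ask the LLM to choose each next action after reading the previous observation.

```python
# Toy tool registry; the notebook's tools print, these return strings instead.
TOOLS = {
    "call_fn": lambda name: f"Calling... {name}",
    "email_fn": lambda name: f"Emailing... {name}",
}


def react_loop(plan):
    """Execute a fixed (tool, argument) plan, collecting one observation
    per action. A real ReAct loop replaces the fixed plan with an LLM
    decision at every step."""
    observations = []
    for tool_name, arg in plan:
        observations.append(TOOLS[tool_name](arg))  # act, then observe
    return observations


# "First call me and then communicate requirements" → call before email.
print(react_loop([("call_fn", "Mayank"), ("email_fn", "Mayank")]))
```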