Memory#

There are several use cases where it is valuable to maintain a store of useful facts that can be intelligently added to the context of the agent just before a specific step. The typical use case here is a RAG pattern, where a query is used to retrieve relevant information from a database, which is then added to the agent's context.

AgentChat provides a Memory protocol that can be extended to provide this functionality. The key methods are `query`, `update_context`, `add`, `clear`, and `close`.

  • `add`: Adds new entries to the memory store

  • `query`: Retrieves relevant information from the memory store

  • `update_context`: Mutates an agent's internal `model_context` by adding the retrieved information to it (used in the AssistantAgent class)

  • `clear`: Clears all entries from the memory store

  • `close`: Cleans up any resources used by the memory store
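
As a rough, framework-free sketch (plain Python, not the actual autogen_core implementation), a memory store following the shape of this protocol might look like:

```python
from dataclasses import dataclass


@dataclass
class MemoryItem:
    content: str


class SimpleListMemory:
    """Toy illustration of the Memory protocol's method surface."""

    def __init__(self) -> None:
        self._items: list[MemoryItem] = []

    async def add(self, item: MemoryItem) -> None:
        self._items.append(item)

    async def query(self, query: str) -> list[MemoryItem]:
        # Naive retrieval: return items that share a word with the query.
        words = set(query.lower().split())
        return [i for i in self._items if words & set(i.content.lower().split())]

    async def clear(self) -> None:
        self._items.clear()

    async def close(self) -> None:
        pass  # no external resources to release
```

A real implementation would also provide `update_context` to inject query results into the agent's `model_context`, which is what AssistantAgent relies on.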

ListMemory Example#

{py:class}`~autogen_core.memory.ListMemory` is an example implementation of the {py:class}`~autogen_core.memory.Memory` protocol. It is a simple list-based memory that maintains memories in chronological order and appends the most recent memories to the model's context. The implementation is designed to be straightforward and predictable, making it easy to understand and debug. In the following example, we will use ListMemory to maintain a memory bank of user preferences and demonstrate how it can provide consistent context for agent responses over time.

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_core.memory import ListMemory, MemoryContent, MemoryMimeType
from autogen_ext.models.openai import OpenAIChatCompletionClient
# Initialize user memory
user_memory = ListMemory()

# Add user preferences to memory
await user_memory.add(MemoryContent(content="The weather should be in metric units", mime_type=MemoryMimeType.TEXT))

await user_memory.add(MemoryContent(content="Meal recipe must be vegan", mime_type=MemoryMimeType.TEXT))


async def get_weather(city: str, units: str = "imperial") -> str:
    if units == "imperial":
        return f"The weather in {city} is 73 °F and Sunny."
    elif units == "metric":
        return f"The weather in {city} is 23 °C and Sunny."
    else:
        return f"Sorry, I don't know the weather in {city}."


assistant_agent = AssistantAgent(
    name="assistant_agent",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o-2024-08-06",
    ),
    tools=[get_weather],
    memory=[user_memory],
)
# Run the agent with a task.
stream = assistant_agent.run_stream(task="What is the weather in New York?")
await Console(stream)
---------- user ----------
What is the weather in New York?
---------- assistant_agent ----------
[MemoryContent(content='The weather should be in metric units', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None), MemoryContent(content='Meal recipe must be vegan', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None)]
---------- assistant_agent ----------
[FunctionCall(id='call_GpimUGED5zUbfxORaGo2JD6F', arguments='{"city":"New York","units":"metric"}', name='get_weather')]
---------- assistant_agent ----------
[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', call_id='call_GpimUGED5zUbfxORaGo2JD6F', is_error=False)]
---------- assistant_agent ----------
The weather in New York is 23 °C and Sunny.
TaskResult(messages=[TextMessage(source='user', models_usage=None, metadata={}, content='What is the weather in New York?', type='TextMessage'), MemoryQueryEvent(source='assistant_agent', models_usage=None, metadata={}, content=[MemoryContent(content='The weather should be in metric units', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None), MemoryContent(content='Meal recipe must be vegan', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None)], type='MemoryQueryEvent'), ToolCallRequestEvent(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=123, completion_tokens=20), metadata={}, content=[FunctionCall(id='call_GpimUGED5zUbfxORaGo2JD6F', arguments='{"city":"New York","units":"metric"}', name='get_weather')], type='ToolCallRequestEvent'), ToolCallExecutionEvent(source='assistant_agent', models_usage=None, metadata={}, content=[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', call_id='call_GpimUGED5zUbfxORaGo2JD6F', is_error=False)], type='ToolCallExecutionEvent'), ToolCallSummaryMessage(source='assistant_agent', models_usage=None, metadata={}, content='The weather in New York is 23 °C and Sunny.', type='ToolCallSummaryMessage')], stop_reason=None)

We can inspect that the assistant_agent's model_context was indeed updated with the retrieved memory entries. The transform method is used to format the retrieved memory entries into a string the agent can use. In this case, we simply concatenate the content of each memory entry into a single string.

await assistant_agent._model_context.get_messages()
[UserMessage(content='What is the weather in New York?', source='user', type='UserMessage'),
 SystemMessage(content='\nRelevant memory content (in chronological order):\n1. The weather should be in metric units\n2. Meal recipe must be vegan\n', type='SystemMessage'),
 AssistantMessage(content=[FunctionCall(id='call_GpimUGED5zUbfxORaGo2JD6F', arguments='{"city":"New York","units":"metric"}', name='get_weather')], thought=None, source='assistant_agent', type='AssistantMessage'),
 FunctionExecutionResultMessage(content=[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', call_id='call_GpimUGED5zUbfxORaGo2JD6F', is_error=False)], type='FunctionExecutionResultMessage')]

We see above that the weather is returned in Celsius, as stated in the user preferences.

Similarly, assuming we ask a separate question about generating a meal plan, the agent is able to retrieve relevant information from the memory store and provide a personalized (vegan) response.

stream = assistant_agent.run_stream(task="Write brief meal recipe with broth")
await Console(stream)
---------- user ----------
Write brief meal recipe with broth
---------- assistant_agent ----------
[MemoryContent(content='The weather should be in metric units', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None), MemoryContent(content='Meal recipe must be vegan', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None)]
---------- assistant_agent ----------
Here's a brief vegan broth recipe:

**Vegan Vegetable Broth**

**Ingredients:**
- 2 tablespoons olive oil
- 1 large onion, chopped
- 2 carrots, sliced
- 2 celery stalks, sliced
- 4 cloves garlic, minced
- 1 teaspoon salt
- 1/2 teaspoon pepper
- 1 bay leaf
- 1 teaspoon thyme
- 1 teaspoon rosemary
- 8 cups water
- 1 cup mushrooms, sliced
- 1 cup chopped leafy greens (e.g., kale, spinach)
- 1 tablespoon soy sauce (optional)

**Instructions:**

1. **Sauté Vegetables:** In a large pot, heat olive oil over medium heat. Add the onion, carrots, and celery, and sauté until they begin to soften, about 5-7 minutes.

2. **Add Garlic & Seasonings:** Stir in the garlic, salt, pepper, bay leaf, thyme, and rosemary. Cook for another 2 minutes until fragrant.

3. **Simmer Broth:** Pour in the water, add mushrooms and soy sauce (if using). Increase heat and bring to a boil. Reduce heat and let it simmer for 30-45 minutes.

4. **Add Greens:** In the last 5 minutes of cooking, add the chopped leafy greens.

5. **Strain & Serve:** Remove from heat, strain out the vegetables (or leave them in for a chunkier texture), and adjust seasoning if needed. Serve hot as a base for soups or enjoy as is!

Enjoy your flavorful, nourishing vegan broth! TERMINATE
TaskResult(messages=[TextMessage(source='user', models_usage=None, metadata={}, content='Write brief meal recipe with broth', type='TextMessage'), MemoryQueryEvent(source='assistant_agent', models_usage=None, metadata={}, content=[MemoryContent(content='The weather should be in metric units', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None), MemoryContent(content='Meal recipe must be vegan', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None)], type='MemoryQueryEvent'), TextMessage(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=208, completion_tokens=331), metadata={}, content="Here's a brief vegan broth recipe:\n\n**Vegan Vegetable Broth**\n\n**Ingredients:**\n- 2 tablespoons olive oil\n- 1 large onion, chopped\n- 2 carrots, sliced\n- 2 celery stalks, sliced\n- 4 cloves garlic, minced\n- 1 teaspoon salt\n- 1/2 teaspoon pepper\n- 1 bay leaf\n- 1 teaspoon thyme\n- 1 teaspoon rosemary\n- 8 cups water\n- 1 cup mushrooms, sliced\n- 1 cup chopped leafy greens (e.g., kale, spinach)\n- 1 tablespoon soy sauce (optional)\n\n**Instructions:**\n\n1. **Sauté Vegetables:** In a large pot, heat olive oil over medium heat. Add the onion, carrots, and celery, and sauté until they begin to soften, about 5-7 minutes.\n\n2. **Add Garlic & Seasonings:** Stir in the garlic, salt, pepper, bay leaf, thyme, and rosemary. Cook for another 2 minutes until fragrant.\n\n3. **Simmer Broth:** Pour in the water, add mushrooms and soy sauce (if using). Increase heat and bring to a boil. Reduce heat and let it simmer for 30-45 minutes.\n\n4. **Add Greens:** In the last 5 minutes of cooking, add the chopped leafy greens.\n\n5. **Strain & Serve:** Remove from heat, strain out the vegetables (or leave them in for a chunkier texture), and adjust seasoning if needed. Serve hot as a base for soups or enjoy as is!\n\nEnjoy your flavorful, nourishing vegan broth! TERMINATE", type='TextMessage')], stop_reason=None)

Custom Memory Stores (Vector DBs, etc.)#

You can build on the Memory protocol to implement more complex memory stores. For example, you could implement a custom memory store that uses a vector database to store and retrieve information, or one that uses a machine learning model to generate personalized responses based on user preferences, and so on.

Specifically, you will need to overload the `add`, `query`, and `update_context` methods to implement the desired functionality, and pass the memory store to your agent.
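
For instance, the core of a vector-database-backed `query` is embedding the stored entries and the query, then ranking by similarity. The sketch below is a self-contained toy (bag-of-words vectors and cosine similarity in plain Python, not the real ChromaDB integration) showing how the `k` and `score_threshold` parameters that appear in the configuration below typically interact:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class ToyVectorMemory:
    def __init__(self, k: int = 2, score_threshold: float = 0.0) -> None:
        self.k = k                            # return at most k results
        self.score_threshold = score_threshold  # drop low-similarity hits
        self._entries: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self._entries.append((text, embed(text)))

    def query(self, text: str) -> list[str]:
        qv = embed(text)
        scored = [(cosine(qv, v), t) for t, v in self._entries]
        scored = [(s, t) for s, t in scored if s >= self.score_threshold]
        scored.sort(reverse=True)
        return [t for _, t in scored[: self.k]]
```

A production store would replace `embed` with a learned embedding model and delegate storage and ranking to the vector database itself.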

Currently the following example memory store is available as part of the autogen_ext extension package.

  • autogen_ext.memory.chromadb.ChromaDBVectorMemory: A memory store that uses a vector database to store and retrieve information.

import os
from pathlib import Path

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_core.memory import MemoryContent, MemoryMimeType
from autogen_ext.memory.chromadb import ChromaDBVectorMemory, PersistentChromaDBVectorMemoryConfig
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Initialize ChromaDB memory with custom config
chroma_user_memory = ChromaDBVectorMemory(
    config=PersistentChromaDBVectorMemoryConfig(
        collection_name="preferences",
        persistence_path=os.path.join(str(Path.home()), ".chromadb_autogen"),
        k=2,  # Return top k results
        score_threshold=0.4,  # Minimum similarity score
    )
)
# a HttpChromaDBVectorMemoryConfig is also supported for connecting to a remote ChromaDB server

# Add user preferences to memory
await chroma_user_memory.add(
    MemoryContent(
        content="The weather should be in metric units",
        mime_type=MemoryMimeType.TEXT,
        metadata={"category": "preferences", "type": "units"},
    )
)

await chroma_user_memory.add(
    MemoryContent(
        content="Meal recipe must be vegan",
        mime_type=MemoryMimeType.TEXT,
        metadata={"category": "preferences", "type": "dietary"},
    )
)


# Create assistant agent with ChromaDB memory
assistant_agent = AssistantAgent(
    name="assistant_agent",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o",
    ),
    tools=[get_weather],
    memory=[chroma_user_memory],
)

stream = assistant_agent.run_stream(task="What is the weather in New York?")
await Console(stream)

await chroma_user_memory.close()
---------- user ----------
What is the weather in New York?
---------- assistant_agent ----------
[MemoryContent(content='The weather should be in metric units', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None), MemoryContent(content='Meal recipe must be vegan', mime_type=<MemoryMimeType.TEXT: 'text/plain'>, metadata=None)]
---------- assistant_agent ----------
[FunctionCall(id='call_PKcfeEHXimGG2QOhJwXzCLuZ', arguments='{"city":"New York","units":"metric"}', name='get_weather')]
---------- assistant_agent ----------
[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', call_id='call_PKcfeEHXimGG2QOhJwXzCLuZ', is_error=False)]
---------- assistant_agent ----------
The weather in New York is 23 °C and Sunny.

Note that you can also serialize the ChromaDBVectorMemory and save it to disk.

chroma_user_memory.dump_component().model_dump_json()
'{"provider":"autogen_ext.memory.chromadb.ChromaDBVectorMemory","component_type":"memory","version":1,"component_version":1,"description":"ChromaDB-based vector memory implementation with similarity search.","label":"ChromaDBVectorMemory","config":{"client_type":"persistent","collection_name":"preferences","distance_metric":"cosine","k":2,"score_threshold":0.4,"allow_reset":false,"tenant":"default_tenant","database":"default_database","persistence_path":"/Users/victordibia/.chromadb_autogen"}}'
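
The dumped component above is just a JSON config describing how to reconstruct the memory. As a generic, framework-free illustration of this config-as-data round trip (VectorMemoryConfig is a hypothetical stand-in for PersistentChromaDBVectorMemoryConfig):

```python
import json
from dataclasses import asdict, dataclass


# Hypothetical stand-in for a component config such as
# PersistentChromaDBVectorMemoryConfig.
@dataclass
class VectorMemoryConfig:
    collection_name: str = "preferences"
    k: int = 2
    score_threshold: float = 0.4


cfg = VectorMemoryConfig()
dumped = json.dumps(asdict(cfg))                     # serialize to JSON
restored = VectorMemoryConfig(**json.loads(dumped))  # rebuild from JSON
```

Because the config captures everything needed to reconstruct the store, the serialized form can be saved to disk and reloaded in another process.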