Memory
Memory stores enable agents to maintain conversation context across multiple interactions, providing continuity and personalization.
Memory Types
AgentSea provides four memory store implementations:
- Buffer Memory: In-memory storage that keeps the most recent N messages
- Redis Memory: Persistent storage backed by Redis for production deployments
- Summary Memory: Automatically summarizes older messages to reduce token usage
- Tenant Buffer Memory: Multi-tenant aware memory with complete data isolation
Buffer Memory
Simple in-memory storage that keeps the most recent messages. Fast and easy to use.
import { Agent, BufferMemory, AnthropicProvider, ToolRegistry } from '@lov3kaizen/agentsea-core';
// Create buffer memory with max 50 messages
const memory = new BufferMemory(50);
// Create agent with memory
const agent = new Agent(
{
name: 'chat-agent',
model: 'claude-sonnet-4-20250514',
provider: 'anthropic',
systemPrompt: 'You are a helpful assistant.',
},
new AnthropicProvider(),
new ToolRegistry(),
memory, // Pass memory to agent
);
// First conversation
await agent.execute('My name is Alice', {
conversationId: 'user-123',
sessionData: {},
history: [],
});
// The agent remembers context from the previous message
await agent.execute('What is my name?', {
conversationId: 'user-123',
sessionData: {},
history: [],
});
// Response: "Your name is Alice"
Redis Memory
Persistent storage using Redis. Ideal for production applications with multiple servers.
import { Agent, RedisMemory, AnthropicProvider, ToolRegistry } from '@lov3kaizen/agentsea-core';
// Create Redis memory
const memory = new RedisMemory({
url: process.env.REDIS_URL || 'redis://localhost:6379',
ttl: 86400, // 24 hours (optional)
keyPrefix: 'agent:', // Optional key prefix
});
// Create agent with Redis memory
const agent = new Agent(
config,
new AnthropicProvider(),
new ToolRegistry(),
memory,
);
// Memory persists across server restarts
await agent.execute('Remember: my favorite color is blue', context);
// Later, even after server restart
await agent.execute('What is my favorite color?', context);
// Response: "Your favorite color is blue"
// Cleanup when done
await memory.disconnect();
Summary Memory
Automatically summarizes old messages to maintain context while reducing token usage.
import { Agent, SummaryMemory, AnthropicProvider, ToolRegistry } from '@lov3kaizen/agentsea-core';
// Create summary memory
const memory = new SummaryMemory(
new AnthropicProvider(),
{
maxMessages: 20, // Keep last 20 messages
summaryModel: 'claude-haiku-4-20250514', // Use fast model for summaries
summaryPrompt: 'Summarize the key points from this conversation:', // Optional
},
);
// Create agent
const agent = new Agent(
config,
new AnthropicProvider(),
new ToolRegistry(),
memory,
);
// As the conversation grows, older messages are automatically summarized
// This keeps token usage low while maintaining context
Memory Interface
All memory stores implement the same interface:
interface MemoryStore {
// Save messages for a conversation
save(conversationId: string, messages: Message[]): Promise<void>;
// Load messages for a conversation
load(conversationId: string): Promise<Message[]>;
// Clear messages for a conversation
clear(conversationId: string): Promise<void>;
// Optional: Check if conversation exists
exists?(conversationId: string): Promise<boolean>;
}
Custom Memory Store
Create your own memory store by implementing the MemoryStore interface:
import { MemoryStore, Message } from '@lov3kaizen/agentsea-core';
export class DatabaseMemory implements MemoryStore {
private db: Database;
constructor(databaseConnection: Database) {
this.db = databaseConnection;
}
async save(conversationId: string, messages: Message[]): Promise<void> {
await this.db.conversations.upsert({
where: { id: conversationId },
update: {
messages: messages,
updatedAt: new Date(),
},
create: {
id: conversationId,
messages: messages,
createdAt: new Date(),
},
});
}
async load(conversationId: string): Promise<Message[]> {
const conversation = await this.db.conversations.findUnique({
where: { id: conversationId },
});
return conversation?.messages || [];
}
async clear(conversationId: string): Promise<void> {
await this.db.conversations.update({
where: { id: conversationId },
data: { messages: [] },
});
}
async exists(conversationId: string): Promise<boolean> {
const count = await this.db.conversations.count({
where: { id: conversationId },
});
return count > 0;
}
}
// Use custom memory
const memory = new DatabaseMemory(prismaClient);
const agent = new Agent(config, provider, toolRegistry, memory);
Tenant Buffer Memory
Multi-tenant aware memory that isolates conversation data by tenant. Essential for multi-tenant applications.
import { Agent, TenantBufferMemory, AnthropicProvider, ToolRegistry } from '@lov3kaizen/agentsea-core';
// Create tenant-aware memory
const memory = new TenantBufferMemory({ maxMessages: 50 });
// Create agent with tenant memory
const agent = new Agent(
{
name: 'support-agent',
model: 'claude-sonnet-4-20250514',
provider: 'anthropic',
systemPrompt: 'You are a customer support assistant.',
},
new AnthropicProvider(),
new ToolRegistry(),
{ type: 'custom', store: memory },
);
// Save conversation for specific tenant
await memory.save('tenant-123', 'conv-456', [
{ role: 'user', content: 'How do I upgrade my plan?' },
{ role: 'assistant', content: 'I can help you upgrade your plan...' },
]);
// Load conversation only for this tenant
const history = await memory.load('tenant-123', 'conv-456');
// Complete tenant isolation - Tenant A cannot access Tenant B's data
const tenantAHistory = await memory.load('tenant-a', 'conv-1'); // ✅ Returns tenant A's data
const tenantBHistory = await memory.load('tenant-b', 'conv-1'); // ✅ Returns tenant B's data (different)
// Clear tenant-specific conversation
await memory.clear('tenant-123', 'conv-456');
// Get all conversation IDs for a tenant
const conversationIds = await memory.getConversationIds('tenant-123');
🏢 Multi-Tenancy Support
TenantBufferMemory provides complete data isolation between tenants, ensuring that one tenant's conversations are never accessible to another. Use this with the Multi-Tenancy system for production SaaS applications.
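In practice the tenant ID is usually resolved from the incoming request (API key, subdomain, or JWT claim) before the agent runs. The following is a minimal sketch of that flow using only the TenantBufferMemory methods shown above; the handler name, the tenant resolution, and the shape of the agent response (`response.content`) are illustrative assumptions, not part of the library.
// Hypothetical request handler: tenantId is resolved upstream (e.g. from an API key or JWT)
async function handleChat(tenantId: string, conversationId: string, userMessage: string) {
  // Load only this tenant's history
  const history = await memory.load(tenantId, conversationId);
  const response = await agent.execute(userMessage, {
    conversationId,
    sessionData: { tenantId },
    history,
  });
  // Persist the updated conversation back under the same tenant key
  // (assumes the response exposes the assistant text as `content`)
  await memory.save(tenantId, conversationId, [
    ...history,
    { role: 'user', content: userMessage },
    { role: 'assistant', content: response.content },
  ]);
  return response;
}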
Managing Conversation History
Control how conversation history is managed:
// Clear conversation history
await memory.clear('conversation-123');
// Load existing history
const history = await memory.load('conversation-123');
// Pass history directly to agent
const response = await agent.execute('Hello', {
conversationId: 'user-123',
sessionData: {},
history: history, // Use pre-loaded history
});
// Check if conversation exists
const exists = await memory.exists('conversation-123');
if (!exists) {
console.log('Starting new conversation');
}
Session Data
Store additional context alongside messages:
const context = {
conversationId: 'user-123',
sessionData: {
userId: '12345',
userName: 'Alice',
preferences: {
language: 'en',
timezone: 'America/Los_Angeles',
},
metadata: {
source: 'web',
device: 'desktop',
},
},
history: [],
};
const response = await agent.execute('What time is it?', context);
// Agent can reference session data in responses
// "Based on your timezone (America/Los_Angeles), it is currently..."
Memory Best Practices
Choosing the Right Memory Store
- BufferMemory: Development, testing, single-server apps
- RedisMemory: Production, multi-server, high availability
- SummaryMemory: Long conversations, token optimization
- TenantBufferMemory: Multi-tenant SaaS applications requiring data isolation
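As a rough rule of thumb, this choice can be wired to the runtime environment. A minimal sketch using the constructors shown above; the selection logic itself is only an illustration:
import { BufferMemory, RedisMemory, MemoryStore } from '@lov3kaizen/agentsea-core';
// Illustrative only: pick a store based on the deployment environment
function createMemory(): MemoryStore {
  if (process.env.NODE_ENV === 'production' && process.env.REDIS_URL) {
    // Multi-server production: persistent, shared store with a 24-hour TTL
    return new RedisMemory({ url: process.env.REDIS_URL, ttl: 86400 });
  }
  // Development and tests: fast, in-process store
  return new BufferMemory(50);
}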
Memory Configuration
- Set appropriate message limits to balance context and token usage
- Use TTL in Redis to automatically expire old conversations
- Implement cleanup jobs to remove inactive conversations
- Monitor memory usage in production environments
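A cleanup job can be as simple as a scheduled delete against whatever backing table your store uses. Here is a sketch assuming the Prisma-style DatabaseMemory from the custom store example above, with an updatedAt column and a hypothetical 30-day retention window:
// Hypothetical retention job for the DatabaseMemory sketched earlier
const RETENTION_DAYS = 30;
async function cleanupInactiveConversations(db: Database): Promise<void> {
  const cutoff = new Date(Date.now() - RETENTION_DAYS * 24 * 60 * 60 * 1000);
  // Delete conversations that have not been updated since the cutoff
  await db.conversations.deleteMany({
    where: { updatedAt: { lt: cutoff } },
  });
}
// Run once a day, e.g. from a scheduler or a simple interval
setInterval(() => {
  cleanupInactiveConversations(prismaClient).catch(console.error);
}, 24 * 60 * 60 * 1000);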
Privacy and Security
- Encrypt sensitive data before storing in memory
- Implement data retention policies
- Provide user controls to delete their conversation history
- Use conversation IDs that don't expose user information
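Encryption at rest can be layered on without touching the agent by wrapping any MemoryStore. The following is a minimal sketch, not part of the library: it assumes a 32-byte key supplied by a secrets manager and stores the ciphertext as a single synthetic message, which is a simplification of what a production implementation would do.
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';
import { MemoryStore, Message } from '@lov3kaizen/agentsea-core';
// Hypothetical wrapper: encrypts messages before delegating to any underlying store
export class EncryptedMemory implements MemoryStore {
  constructor(
    private inner: MemoryStore,
    private key: Buffer, // 32-byte AES-256 key, e.g. loaded from a secrets manager
  ) {}
  async save(conversationId: string, messages: Message[]): Promise<void> {
    const iv = randomBytes(12);
    const cipher = createCipheriv('aes-256-gcm', this.key, iv);
    const ciphertext = Buffer.concat([cipher.update(JSON.stringify(messages), 'utf8'), cipher.final()]);
    const payload = Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString('base64');
    // Simplification: persist the ciphertext as one synthetic message
    await this.inner.save(conversationId, [{ role: 'system', content: payload } as Message]);
  }
  async load(conversationId: string): Promise<Message[]> {
    const stored = await this.inner.load(conversationId);
    if (stored.length === 0) return [];
    const raw = Buffer.from(stored[0].content, 'base64');
    const decipher = createDecipheriv('aes-256-gcm', this.key, raw.subarray(0, 12));
    decipher.setAuthTag(raw.subarray(12, 28));
    const plaintext = Buffer.concat([decipher.update(raw.subarray(28)), decipher.final()]);
    return JSON.parse(plaintext.toString('utf8'));
  }
  async clear(conversationId: string): Promise<void> {
    await this.inner.clear(conversationId);
  }
}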
Comparison Table
| Feature | Buffer | Redis | Summary | Tenant Buffer |
|---|---|---|---|---|
| Persistence | ❌ No | ✅ Yes | ⚠️ Depends | ❌ No |
| Speed | ⚡ Fastest | ⚡ Fast | 🐢 Slower | ⚡ Fastest |
| Scalability | ❌ Single server | ✅ Multi-server | ⚠️ Depends | ✅ Multi-tenant |
| Token Usage | ⚠️ Medium | ⚠️ Medium | ✅ Optimized | ⚠️ Medium |
| Setup | ✅ Simple | ⚠️ Redis required | ⚠️ Extra LLM calls | ✅ Simple |
| Tenant Isolation | ❌ No | ⚠️ Manual | ⚠️ Manual | ✅ Built-in |