# Agents
Agents are the core building blocks of AgentSea. They represent autonomous AI entities that can reason, use tools, and maintain conversation context.
## What is an Agent?
An agent in AgentSea is an intelligent entity powered by a Large Language Model (LLM) that can:
- Process natural language inputs and generate responses
- Use tools to perform actions (API calls, file operations, calculations, etc.)
- Maintain conversation history and context
- Make decisions about which tools to use and when
- Stream responses for real-time interactions
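Conceptually, each agent turn runs a reason-act loop: the model either answers directly or requests tool calls, whose results are fed back for another round until a final answer emerges. A simplified sketch of that loop (illustrative only, not AgentSea's actual internals; every name here is hypothetical):

```typescript
// Simplified agent loop sketch (not AgentSea internals).
type ToolCall = { name: string; args: unknown };
type LlmTurn = { content: string; toolCalls: ToolCall[] };

// The model either answers directly or requests tool calls; the loop
// executes requested tools and feeds results back until the model answers.
async function agentLoop(
  callLlm: (history: string[]) => Promise<LlmTurn>,
  runTool: (call: ToolCall) => Promise<string>,
  prompt: string,
): Promise<string> {
  const history = [prompt];
  for (let step = 0; step < 5; step++) { // cap steps to avoid infinite loops
    const turn = await callLlm(history);
    if (turn.toolCalls.length === 0) return turn.content; // final answer
    for (const call of turn.toolCalls) {
      history.push(`tool:${call.name} -> ${await runTool(call)}`);
    }
  }
  throw new Error('agent exceeded max steps');
}
```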
## Creating an Agent
Here's a basic example of creating an agent:
```typescript
import {
  Agent,
  AnthropicProvider,
  ToolRegistry,
  BufferMemory,
} from '@lov3kaizen/agentsea-core';
import type { AgentConfig, AgentContext, AgentResponse } from '@lov3kaizen/agentsea-types';

// Set up provider
const provider = new AnthropicProvider(process.env.ANTHROPIC_API_KEY);

// Set up tool registry
const toolRegistry = new ToolRegistry();

// Set up memory
const memory = new BufferMemory(50); // Keep last 50 messages

// Create agent
const agent = new Agent(
  {
    name: 'customer-support-agent',
    description: 'A helpful customer support assistant',
    model: 'claude-sonnet-4-20250514',
    provider: 'anthropic',
    systemPrompt: 'You are a friendly customer support agent...',
    tools: [],
    temperature: 0.7,
    maxTokens: 2048,
  },
  provider,
  toolRegistry,
  memory,
);
```

## Agent Configuration
The agent configuration object accepts the following properties:
| Property | Type | Description |
|---|---|---|
| name | string | Unique identifier for the agent |
| description | string | Human-readable description of the agent |
| model | string | LLM model name (e.g., claude-sonnet-4-20250514) |
| provider | string | 'anthropic', 'openai', 'ollama', 'llama-cpp', 'gpt4all', or 'huggingface' |
| systemPrompt | string | Instructions for the agent's behavior |
| tools | Tool[] | Array of tools the agent can use |
| temperature | number | Randomness (0.0-1.0). Lower = more focused |
| maxTokens | number | Maximum response length |
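As an illustration, here is a hypothetical configuration tuned for a factual Q&A task, pairing a low temperature with a restrictive system prompt (the field values are examples, not defaults):

```typescript
// Hypothetical config for a factual Q&A agent; low temperature keeps
// answers focused and deterministic.
const factualConfig = {
  name: 'faq-agent',
  description: 'Answers product questions from documentation',
  model: 'claude-sonnet-4-20250514',
  provider: 'anthropic',
  systemPrompt:
    "Answer strictly from the provided documentation. If unsure, say you don't know.",
  tools: [],
  temperature: 0.2, // 0.0-0.3 range suits factual tasks
  maxTokens: 1024,
};
```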
## Executing an Agent
To execute an agent, call the `execute()` method with a prompt and context:
```typescript
const response = await agent.execute(
  'What is the weather like today?',
  {
    conversationId: 'user-123',
    sessionData: { userId: '123', location: 'San Francisco' },
    history: [],
  },
);

console.log(response.content); // Agent's response
console.log(response.toolCalls); // Tools that were used
console.log(response.metadata); // Additional metadata
```

## Streaming Responses
For real-time interactions, use streaming to receive responses as they're generated:
```typescript
const stream = await agent.stream(
  'Write a story about a robot',
  context,
);

for await (const chunk of stream) {
  if (chunk.type === 'content') {
    process.stdout.write(chunk.content);
  } else if (chunk.type === 'tool_call') {
    console.log('Using tool:', chunk.toolName);
  }
}
```

## Agent with Tools
Agents become powerful when equipped with tools:
```typescript
import {
  Agent,
  AnthropicProvider,
  ToolRegistry,
  calculatorTool,
  httpRequestTool,
} from '@lov3kaizen/agentsea-core';

// Set up tools
const toolRegistry = new ToolRegistry();
toolRegistry.register(calculatorTool);
toolRegistry.register(httpRequestTool);

// Create agent with tools
const agent = new Agent(
  {
    name: 'data-analyst',
    model: 'claude-sonnet-4-20250514',
    provider: 'anthropic',
    systemPrompt: 'You are a data analyst that can fetch and analyze data.',
    tools: [calculatorTool, httpRequestTool],
  },
  new AnthropicProvider(),
  toolRegistry,
);

// Agent can now use tools automatically
const response = await agent.execute(
  'Fetch data from https://api.example.com/stats and calculate the average',
  context,
);
```

## Memory Management
Agents can use different memory stores to maintain context:
```typescript
import {
  Agent,
  AnthropicProvider,
  BufferMemory, // In-memory, simple
  RedisMemory, // Persistent, scalable
  SummaryMemory, // Auto-summarization
} from '@lov3kaizen/agentsea-core';

// Option 1: Buffer memory (default)
const bufferMemory = new BufferMemory(100); // Keep last 100 messages

// Option 2: Redis memory (persistent)
const redisMemory = new RedisMemory({
  url: 'redis://localhost:6379',
  ttl: 86400, // 24 hours
});

// Option 3: Summary memory (auto-summarizes old messages)
const summaryMemory = new SummaryMemory(
  new AnthropicProvider(),
  {
    maxMessages: 20,
    summaryModel: 'claude-haiku-4-20250514',
  },
);

const agent = new Agent(config, provider, toolRegistry, redisMemory);
```

## Multiple Providers
Switch between Anthropic Claude and OpenAI easily:
```typescript
import { Agent, AnthropicProvider, OpenAIProvider } from '@lov3kaizen/agentsea-core';

// Anthropic Claude
const claudeAgent = new Agent(
  {
    name: 'claude-agent',
    model: 'claude-sonnet-4-20250514',
    provider: 'anthropic',
  },
  new AnthropicProvider(),
  toolRegistry,
);

// OpenAI GPT
const gptAgent = new Agent(
  {
    name: 'gpt-agent',
    model: 'gpt-4-turbo-preview',
    provider: 'openai',
  },
  new OpenAIProvider(),
  toolRegistry,
);
```

## Local & Open Source Providers
Run agents with local LLMs for privacy, cost savings, and offline operation:
```typescript
import { Agent, OllamaProvider, LlamaCppProvider } from '@lov3kaizen/agentsea-core';

// Ollama (easiest for local models)
const ollamaAgent = new Agent(
  {
    name: 'local-agent',
    model: 'llama3.2',
    provider: 'ollama',
    systemPrompt: 'You are a helpful assistant running locally.',
  },
  new OllamaProvider({
    baseUrl: 'http://localhost:11434',
    model: 'llama3.2',
  }),
  toolRegistry,
);

// llama.cpp (fastest performance)
const llamaCppAgent = new Agent(
  {
    name: 'fast-local-agent',
    model: 'llama-3.2-3b-q4_k_m',
    provider: 'llama-cpp',
  },
  new LlamaCppProvider({
    baseUrl: 'http://localhost:8080',
  }),
  toolRegistry,
);

// No network round trip to an external API
const response = await ollamaAgent.execute('Hello!', context);
```

Benefits of Local Providers:
- 🔒 Data privacy - nothing leaves your machine
- 💰 No API costs - unlimited usage
- ⚡ Low latency - no network round trips
- 🔌 Offline capable - works without internet
## Best Practices
- System Prompts: Write clear, specific system prompts that define the agent's role and behavior
- Temperature: Use lower temperature (0.0-0.3) for factual tasks, higher (0.7-1.0) for creative tasks
- Memory: Choose memory store based on your needs (Buffer for simple, Redis for production, Summary for long conversations)
- Tools: Only provide tools that are relevant to the agent's purpose
- Error Handling: Always wrap agent execution in try-catch blocks
- Streaming: Use streaming for better user experience in interactive applications
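The error-handling advice above can be sketched with a small retry helper wrapped around `agent.execute()`. The helper is hypothetical (not part of AgentSea) and simply retries transient failures with exponential backoff:

```typescript
// Hypothetical helper: retry a flaky async call with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        // Back off: 500ms, 1s, 2s, ... before the next attempt.
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}

// Usage sketch (agent and context as created earlier):
// const response = await withRetry(() => agent.execute(prompt, context));
```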