v0.5.2 release: contributors, sponsors, and enquiries are most welcome 😌

Examples

Explore practical examples to learn how to build powerful agentic AI applications with AgentSea.

💬 Basic Chatbot

Create a simple conversational agent with memory and tool calling capabilities.

Agent · Memory · Tools
import {
  Agent,
  AnthropicProvider,
  ToolRegistry,
  BufferMemory,
  calculatorTool,
} from '@lov3kaizen/agentsea-core';

const agent = new Agent(
  {
    name: 'chatbot',
    model: 'claude-sonnet-4-20250514',
    provider: 'anthropic',
    systemPrompt: 'You are a helpful assistant.',
    tools: [calculatorTool],
  },
  new AnthropicProvider(),
  new ToolRegistry(),
  new BufferMemory(50),
);

const response = await agent.execute(
  'What is 42 * 58?',
  { conversationId: 'user-123', sessionData: {}, history: [] }
);
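
`new BufferMemory(50)` above caps the stored history at the most recent 50 messages. Purely for intuition, a buffer memory of this kind reduces to a few lines (an illustrative sketch, not the library's implementation):

```typescript
// Minimal buffer memory: keep only the newest `maxMessages` entries.
type Message = { role: 'user' | 'assistant'; content: string };

class SimpleBufferMemory {
  private messages: Message[] = [];

  constructor(private readonly maxMessages: number) {}

  add(message: Message): void {
    this.messages.push(message);
    // Drop the oldest messages once the cap is exceeded.
    if (this.messages.length > this.maxMessages) {
      this.messages = this.messages.slice(-this.maxMessages);
    }
  }

  load(): Message[] {
    return [...this.messages];
  }
}

const memory = new SimpleBufferMemory(2);
memory.add({ role: 'user', content: 'first' });
memory.add({ role: 'assistant', content: 'second' });
memory.add({ role: 'user', content: 'third' });
// Only the two newest messages survive.
```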

📝 Content Pipeline

Sequential workflow for research, writing, and editing content.

Workflow · Sequential · Multi-Agent
import { WorkflowFactory, httpRequestTool } from '@lov3kaizen/agentsea-core';

const workflow = WorkflowFactory.create(
  {
    name: 'content-pipeline',
    type: 'sequential',
    agents: [
      {
        name: 'researcher',
        systemPrompt: 'Research and gather information.',
        tools: [httpRequestTool],
      },
      {
        name: 'writer',
        systemPrompt: 'Write comprehensive content.',
      },
      {
        name: 'editor',
        systemPrompt: 'Edit and polish for publication.',
      },
    ],
  },
  provider,
  toolRegistry,
);

const result = await workflow.execute(
  'Write an article about AI agents',
  context
);

🔌 MCP Integration

Connect to MCP servers for filesystem, GitHub, and more.

MCP · Tools · Integration
import { MCPRegistry } from '@lov3kaizen/agentsea-core';

const mcpRegistry = new MCPRegistry();

await mcpRegistry.addServer({
  name: 'filesystem',
  command: 'npx',
  args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
  transport: 'stdio',
});

await mcpRegistry.addServer({
  name: 'github',
  command: 'npx',
  args: ['-y', '@modelcontextprotocol/server-github'],
  transport: 'stdio',
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

const tools = mcpRegistry.getTools();
// Tools: filesystem:read_file, github:create_issue, etc.

🎯 Customer Support Router

Supervisor workflow that routes requests to specialized agents.

Workflow · Supervisor · Routing
import { WorkflowFactory, databaseQueryTool } from '@lov3kaizen/agentsea-core';

const workflow = WorkflowFactory.create(
  {
    name: 'support-router',
    type: 'supervisor',
    supervisor: {
      name: 'router',
      systemPrompt: 'Route to: technical-support, billing, or general',
    },
    agents: [
      {
        name: 'technical-support',
        systemPrompt: 'Provide technical support.',
        tools: [databaseQueryTool],
      },
      {
        name: 'billing',
        systemPrompt: 'Handle billing inquiries.',
      },
      {
        name: 'general',
        systemPrompt: 'General support.',
      },
    ],
  },
  provider,
  toolRegistry,
);
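
In a supervisor workflow the routing decision is made by the supervisor LLM itself, guided by its system prompt. For intuition only, the effect is similar to this hard-coded dispatcher (an illustrative sketch, not how the library actually decides):

```typescript
// Toy stand-in for the supervisor's routing decision.
type AgentName = 'technical-support' | 'billing' | 'general';

function route(message: string): AgentName {
  const text = message.toLowerCase();
  // Technical symptoms go to technical support.
  if (/error|crash|bug|not working/.test(text)) return 'technical-support';
  // Money-related requests go to billing.
  if (/invoice|refund|charge|payment/.test(text)) return 'billing';
  // Everything else falls through to the general agent.
  return 'general';
}
```

The LLM version handles phrasing the keyword approach misses, which is the point of using a supervisor agent rather than rules.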

📊 Data Analysis Pipeline

Parallel workflow for multi-perspective data analysis.

Workflow · Parallel · Analysis
import { WorkflowFactory } from '@lov3kaizen/agentsea-core';

const workflow = WorkflowFactory.create(
  {
    name: 'analysis',
    type: 'parallel',
    agents: [
      {
        name: 'sentiment',
        systemPrompt: 'Analyze sentiment.',
      },
      {
        name: 'keywords',
        systemPrompt: 'Extract keywords.',
      },
      {
        name: 'summary',
        systemPrompt: 'Summarize content.',
      },
    ],
  },
  provider,
  toolRegistry,
);

const result = await workflow.execute(
  'Analyze this product review: ...',
  context
);

🏢 NestJS Application

Enterprise-ready agent service with NestJS integration.

NestJS · API · Production
import { Module, Controller, Injectable, Post, Body } from '@nestjs/common';
import { AgenticModule } from '@lov3kaizen/agentsea-nestjs';
import { Agent, AnthropicProvider } from '@lov3kaizen/agentsea-core';

@Module({
  imports: [
    AgenticModule.forRoot({
      provider: new AnthropicProvider(),
      defaultConfig: {
        model: 'claude-sonnet-4-20250514',
        provider: 'anthropic',
      },
    }),
  ],
})
export class AppModule {}

@Injectable()
export class ChatService {
  // Agent instance assumed to be provided through AgenticModule's DI setup
  constructor(private readonly agent: Agent) {}

  async chat(message: string) {
    return this.agent.execute(message, {
      conversationId: 'api-user',
      sessionData: {},
      history: [],
    });
  }
}

@Controller('chat')
export class ChatController {
  constructor(private readonly chatService: ChatService) {}

  @Post()
  async chat(@Body('message') message: string) {
    return this.chatService.chat(message);
  }
}

🦙 Local Agent with Ollama

Run agents completely offline with local models for privacy and cost savings.

Local · Ollama · Privacy
import {
  Agent,
  OllamaProvider,
  ToolRegistry,
  BufferMemory,
  calculatorTool,
} from '@lov3kaizen/agentsea-core';

// No API key needed - runs locally!
const provider = new OllamaProvider({
  baseUrl: 'http://localhost:11434',
  model: 'llama3.2' // or mistral, gemma2, etc.
});

const agent = new Agent(
  {
    name: 'local-assistant',
    model: 'llama3.2',
    provider: 'ollama',
    systemPrompt: 'You are a helpful assistant running locally.',
    tools: [calculatorTool],
    temperature: 0.7,
  },
  provider,
  new ToolRegistry(),
  new BufferMemory(50),
);

// Completely private - no data leaves your machine
const response = await agent.execute(
  'What is 156 * 89?',
  { conversationId: 'local-user', sessionData: {}, history: [] }
);

High-Performance Local Workflow

Multi-agent workflow using llama.cpp for maximum speed.

Local · llama.cpp · Workflow
import { WorkflowFactory, LlamaCppProvider, ToolRegistry } from '@lov3kaizen/agentsea-core';

// Ultra-fast inference with llama.cpp
const provider = new LlamaCppProvider({
  baseUrl: 'http://localhost:8080',
  model: 'llama-3.2-3b-q4_k_m'
});

const workflow = WorkflowFactory.create(
  {
    name: 'local-analysis',
    type: 'parallel',
    agents: [
      {
        name: 'summarizer',
        model: 'llama-3.2-3b-q4_k_m',
        provider: 'llama-cpp',
        systemPrompt: 'Summarize the main points.',
      },
      {
        name: 'sentiment',
        model: 'llama-3.2-3b-q4_k_m',
        provider: 'llama-cpp',
        systemPrompt: 'Analyze sentiment.',
      },
      {
        name: 'keywords',
        model: 'llama-3.2-3b-q4_k_m',
        provider: 'llama-cpp',
        systemPrompt: 'Extract key terms.',
      },
    ],
  },
  provider,
  new ToolRegistry(),
);

// All agents run in parallel locally - blazing fast!
const result = await workflow.execute(
  'Analyze: The product is great but expensive.',
  context
);
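
A parallel workflow fans the same input out to every agent at once and collects the results by agent name. The concurrency itself is just Promise.all (a sketch with stub agents, not the library's internals):

```typescript
// Run several "agents" concurrently and key the results by agent name.
type StubAgent = { name: string; execute: (input: string) => Promise<string> };

async function runParallel(
  agents: StubAgent[],
  input: string,
): Promise<Record<string, string>> {
  // All executions start immediately; we wait for the slowest one.
  const outputs = await Promise.all(agents.map((a) => a.execute(input)));
  return Object.fromEntries(agents.map((a, i) => [a.name, outputs[i]]));
}

const stubs: StubAgent[] = [
  { name: 'summarizer', execute: async (s) => `summary of: ${s}` },
  { name: 'sentiment', execute: async () => 'mixed' },
];

const resultsPromise = runParallel(stubs, 'great but expensive');
```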

📈 Observability Setup

Monitor agents with logging, metrics, and distributed tracing.

Observability · Monitoring · Metrics
import { Logger, globalMetrics, globalTracer } from '@lov3kaizen/agentsea-core';

const logger = new Logger({ level: 'info' });

// Log execution
logger.info('Agent started', { agentName: 'chat-agent' });

// Track metrics
globalMetrics.recordCounter('agent.executions', 1, {
  agentName: 'chat-agent',
  status: 'success',
});

globalMetrics.recordHistogram('agent.latency', 1250, {
  agentName: 'chat-agent',
});

// Create trace
const trace = globalTracer.createTrace('user-request');
const span = trace.createSpan('agent-execution');
// ... execute agent
span.end();

// Export to monitoring service
globalMetrics.subscribe((metric) => {
  sendToPrometheus(metric);
});
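
The subscribe hook at the end is a plain observer pattern: every recorded metric is pushed to each subscriber, which can forward it to Prometheus or any other backend. A minimal version of that mechanism (illustrative sketch, not the library's metrics registry):

```typescript
// Minimal metrics registry with a subscriber (observer) hook.
type Metric = { name: string; value: number; tags?: Record<string, string> };
type Subscriber = (metric: Metric) => void;

class SimpleMetrics {
  private subscribers: Subscriber[] = [];

  subscribe(fn: Subscriber): void {
    this.subscribers.push(fn);
  }

  recordCounter(name: string, value: number, tags?: Record<string, string>): void {
    const metric: Metric = { name, value, tags };
    // Fan the metric out to every registered subscriber.
    this.subscribers.forEach((fn) => fn(metric));
  }
}

const metrics = new SimpleMetrics();
const received: Metric[] = [];
metrics.subscribe((m) => received.push(m));
metrics.recordCounter('agent.executions', 1, { status: 'success' });
```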

🔐 Advanced Memory Management

Use Redis and summary memory for persistent, scalable storage.

Memory · Redis · Production
import { Agent, AnthropicProvider, RedisMemory, SummaryMemory } from '@lov3kaizen/agentsea-core';

// Redis for persistence
const redisMemory = new RedisMemory({
  url: 'redis://localhost:6379',
  ttl: 86400, // 24 hours
  keyPrefix: 'agent:',
});

// Summary for long conversations
const summaryMemory = new SummaryMemory(
  new AnthropicProvider(),
  {
    maxMessages: 20,
    summaryModel: 'claude-haiku-4-20250514',
  },
);

const agent = new Agent(
  config,
  provider,
  toolRegistry,
  redisMemory, // or summaryMemory
);

// Memory persists across restarts
await agent.execute('Remember: my name is Alice', context);
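
SummaryMemory keeps long conversations under maxMessages by asking a cheap model to compress the oldest messages into a running summary. The trimming logic, with the LLM call replaced by a stub summarizer, looks roughly like this (illustrative sketch, not the library's implementation):

```typescript
// When the history exceeds `maxMessages`, fold the oldest messages into a summary.
type Msg = { role: string; content: string };
type Summarize = (messages: Msg[]) => string; // stands in for the LLM call

function compactHistory(history: Msg[], maxMessages: number, summarize: Summarize): Msg[] {
  if (history.length <= maxMessages) return history;
  // Keep the most recent half of the budget verbatim; summarize the rest.
  const cutoff = history.length - Math.floor(maxMessages / 2);
  const older = history.slice(0, cutoff);
  const recent = history.slice(cutoff);
  return [{ role: 'system', content: summarize(older) }, ...recent];
}

const stubSummarize: Summarize = (msgs) => `summary of ${msgs.length} messages`;
const history: Msg[] = Array.from({ length: 6 }, (_, i) => ({ role: 'user', content: `m${i}` }));
const compacted = compactHistory(history, 4, stubSummarize);
```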

🏢 Multi-Tenancy Support

Build SaaS applications with complete tenant isolation and API key authentication.

Multi-Tenancy · SaaS · Enterprise
import {
  TenantManager,
  MemoryTenantStorage,
  TenantBufferMemory,
  Agent,
} from '@lov3kaizen/agentsea-core';

// Initialize multi-tenancy
const storage = new MemoryTenantStorage();
const tenantManager = new TenantManager(storage);

// Create tenant
const tenant = await tenantManager.createTenant({
  name: 'Acme Corp',
  slug: 'acme-corp',
  settings: {
    maxAgents: 10,
    maxConversations: 100,
    allowedProviders: ['anthropic'],
  },
});

// Generate API key
const apiKey = await tenantManager.generateApiKey(tenant.id);

// Tenant-isolated memory
const memory = new TenantBufferMemory();

// Execute with tenant context (agent constructed as in the Basic Chatbot example)
const response = await agent.execute(message, {
  conversationId: 'conv-1',
  sessionData: { tenantId: tenant.id },
  history: await memory.load(tenant.id, 'conv-1'),
});

// Track usage
await tenantManager.recordQuotaUsage(tenant.id, {
  resource: 'api_calls',
  amount: 1,
  period: 'hourly',
});
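
recordQuotaUsage amounts to keeping a counter per (tenant, resource, period) and checking it against the tenant's limits. A minimal in-memory version of that bookkeeping (illustrative sketch, not TenantManager's actual storage):

```typescript
// Per-tenant quota counters keyed by tenant, resource, and period.
class QuotaTracker {
  private usage = new Map<string, number>();

  // Returns the new running total after recording `amount`.
  record(tenantId: string, resource: string, period: string, amount: number): number {
    const key = `${tenantId}:${resource}:${period}`;
    const next = (this.usage.get(key) ?? 0) + amount;
    this.usage.set(key, next);
    return next;
  }

  isWithinLimit(tenantId: string, resource: string, period: string, limit: number): boolean {
    return (this.usage.get(`${tenantId}:${resource}:${period}`) ?? 0) <= limit;
  }
}

const quotas = new QuotaTracker();
quotas.record('acme', 'api_calls', 'hourly', 1);
quotas.record('acme', 'api_calls', 'hourly', 1);
```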

🎙️ Voice-Enabled Agent

Create agents with built-in speech-to-text and text-to-speech capabilities.

Voice · STT · TTS
import {
  VoiceAgent,
  Agent,
  OpenAIWhisperProvider,
  OpenAITTSProvider,
  AnthropicProvider,
  ToolRegistry,
} from '@lov3kaizen/agentsea-core';
import fs from 'fs';

// Create base agent
const agent = new Agent(
  {
    name: 'voice-assistant',
    model: 'claude-sonnet-4-20250514',
    provider: 'anthropic',
    systemPrompt: 'You are a helpful voice assistant.',
  },
  new AnthropicProvider(),
  new ToolRegistry(),
);

// Wrap with voice capabilities
const voiceAgent = new VoiceAgent(agent, {
  sttProvider: new OpenAIWhisperProvider(),
  ttsProvider: new OpenAITTSProvider({ voice: 'nova' }),
  autoSpeak: true, // Automatically convert responses to speech
});

// Voice in → Voice out
const audioFile = fs.readFileSync('./user-question.mp3');
const result = await voiceAgent.processVoice(audioFile, context);

// Save voice response
fs.writeFileSync('./agent-response.mp3', result.audio);

👥 Multi-Agent Crew

Create a research crew with role-based agents and delegation strategies.

Crews · Multi-Agent · Roles
import { createCrew, createResearchCrew, ResearchTasks } from '@lov3kaizen/agentsea-crews';

// Option 1: Use pre-built template
const researchCrew = createResearchCrew({
  depth: 'deep',
  includeWriter: true,
});

researchCrew.addTask(ResearchTasks.research('AI agents', 'deep'));
researchCrew.addTask(ResearchTasks.writeReport('AI Agents Analysis', 'executive'));

const result = await researchCrew.kickoff();

// Option 2: Create custom crew
const customCrew = createCrew({
  name: 'analysis-crew',
  delegationStrategy: 'best-match', // or 'consensus', 'auction', 'hierarchical'
  agents: [
    {
      name: 'analyst',
      role: {
        name: 'Data Analyst',
        capabilities: [{ name: 'analysis', proficiency: 'expert' }],
        goals: ['Provide accurate analysis'],
      },
      model: 'claude-sonnet-4-20250514',
      provider: 'anthropic',
    },
  ],
});

🌐 LLM Gateway

OpenAI-compatible gateway with intelligent routing and cost optimization.

Gateway · Routing · Cost
import { Gateway, createHTTPServer, startServer } from '@lov3kaizen/agentsea-gateway';

const gateway = new Gateway({
  providers: [
    { name: 'openai', apiKey: process.env.OPENAI_API_KEY, models: ['gpt-4o'] },
    { name: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY, models: ['claude-3-5-sonnet'] },
  ],
  routing: { strategy: 'cost-optimized' },
  cache: { enabled: true, ttl: 3600 },
});

// Use virtual models for auto-routing
const response = await gateway.chat.completions.create({
  model: 'cheapest', // or 'best', 'fastest'
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(response._gateway); // { provider, cost, latencyMs }

// Or run as HTTP server
const app = createHTTPServer({ gateway });
startServer(app, { port: 3000 });
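
Under a cost-optimized strategy, a virtual model like 'cheapest' resolves to whichever configured provider can serve the request at the lowest per-token price. The selection step reduces to a comparison over a price table (hypothetical prices, illustrative sketch):

```typescript
// Pick the cheapest candidate from a per-1K-token price table (prices are made up).
type Candidate = { provider: string; model: string; costPer1kTokens: number };

function pickCheapest(candidates: Candidate[]): Candidate {
  return candidates.reduce((best, c) =>
    c.costPer1kTokens < best.costPer1kTokens ? c : best,
  );
}

const candidates: Candidate[] = [
  { provider: 'openai', model: 'gpt-4o', costPer1kTokens: 0.005 },
  { provider: 'anthropic', model: 'claude-3-5-sonnet', costPer1kTokens: 0.003 },
];
```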

🛡️ Safety Guardrails

Add content safety, prompt injection detection, and PII filtering.

Guardrails · Safety · Security
import {
  createGuardrailsEngine,
  ToxicityGuard,
  PIIGuard,
  PromptInjectionGuard,
} from '@lov3kaizen/agentsea-guardrails';

const engine = createGuardrailsEngine({
  guards: [
    { name: 'toxicity', enabled: true, type: 'input', action: 'block' },
    { name: 'pii', enabled: true, type: 'both', action: 'transform' },
    { name: 'prompt-injection', enabled: true, type: 'input', action: 'block' },
  ],
  failureMode: 'fail-fast',
});

engine.registerGuard(new ToxicityGuard({ sensitivity: 'medium' }));
engine.registerGuard(new PIIGuard({ types: ['email', 'phone'], maskingStrategy: 'redact' }));
engine.registerGuard(new PromptInjectionGuard({ sensitivity: 'high' }));

// Check input before sending to LLM
const result = await engine.checkInput(userMessage, { sessionId: 'user-1' });

if (result.passed) {
  const response = await agent.execute(result.transformedContent || userMessage);
  // Check output before returning to user
  const outputCheck = await engine.checkOutput(response.content);
}
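
The PIIGuard's 'transform' action rewrites the text instead of blocking it, masking matched entities. For emails and phone numbers a redaction pass can be as simple as two regexes (illustrative sketch; the real guard's detection is more robust than this):

```typescript
// Redact emails and simple US-style phone numbers from a string.
function redactPII(text: string): string {
  return text
    // Email addresses become a placeholder token.
    .replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[EMAIL]')
    // Simple 3-3-4 phone patterns with -, ., or space separators.
    .replace(/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, '[PHONE]');
}
```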

📊 LLM Evaluation Pipeline

Evaluate LLM quality with metrics, LLM-as-Judge, and human feedback.

Evaluation · Metrics · Quality
import {
  EvaluationPipeline,
  AccuracyMetric,
  RelevanceMetric,
  RubricJudge,
  EvalDataset,
} from '@lov3kaizen/agentsea-evaluate';

// Create evaluation pipeline
const pipeline = new EvaluationPipeline({
  metrics: [
    new AccuracyMetric({ type: 'fuzzy' }),
    new RelevanceMetric(),
  ],
  parallelism: 5,
});

// Create test dataset
const dataset = new EvalDataset({
  items: [
    { id: '1', input: 'What is the capital of France?', expectedOutput: 'Paris' },
    { id: '2', input: 'What is 2 + 2?', expectedOutput: '4' },
  ],
});

// Run evaluation
const results = await pipeline.evaluate({
  dataset,
  generateFn: async (input) => await agent.execute(input),
});

console.log(results.summary); // { passRate: 0.95, avgScore: 0.87 }

// LLM-as-Judge for subjective evaluation
const judge = new RubricJudge({
  provider: anthropicProvider,
  rubric: {
    criteria: 'Response Quality',
    levels: [
      { score: 1, description: 'Poor' },
      { score: 5, description: 'Excellent' },
    ],
  },
});
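
An AccuracyMetric with type: 'fuzzy' passes answers that match the expected output after normalization rather than requiring an exact string match. A plausible normalization-based comparison (illustrative sketch, not the library's actual scorer):

```typescript
// Fuzzy match: compare after lowercasing, trimming, and stripping punctuation.
function normalize(text: string): string {
  return text.toLowerCase().replace(/[^\w\s]/g, '').trim();
}

function fuzzyMatch(actual: string, expected: string): boolean {
  const a = normalize(actual);
  const e = normalize(expected);
  // Accept exact normalized equality, or the expected answer contained in the response.
  return a === e || a.includes(e);
}
```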

🔍 Vector Embeddings

Multi-provider embeddings with chunking, caching, and vector stores.

Embeddings · RAG · Search
import {
  createEmbeddingManager,
  createOpenAIProvider,
  createRecursiveChunker,
  createMemoryStore,
  createMemoryCache,
} from '@lov3kaizen/agentsea-embeddings';

const manager = createEmbeddingManager({
  defaultModel: 'text-embedding-3-small',
  defaultProvider: 'openai',
});

manager.registerModel(createOpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }), true);
manager.setChunker(createRecursiveChunker());
manager.setStore(createMemoryStore({ dimensions: 1536 }));
manager.setCache(createMemoryCache({ maxEntries: 10000 }));

// Embed a document
const chunks = await manager.embedDocument(longDocument, {
  documentId: 'doc-1',
  type: 'markdown',
});

// Semantic search
const results = await manager.search('What is the main topic?', {
  topK: 5,
  minScore: 0.7,
});

// Check similarity
const score = await manager.similarity('hello world', 'hi there');
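
manager.similarity embeds both strings and compares the resulting vectors; the comparison itself is cosine similarity, the dot product of the two embeddings divided by the product of their norms. The math, with toy vectors standing in for real embeddings:

```typescript
// Cosine similarity between two equal-length vectors: dot(a, b) / (|a| * |b|).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Identical directions score 1, orthogonal directions score 0, which is why `minScore: 0.7` above acts as a relevance cutoff for search hits.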

🖥️ Browser Automation

Control desktop and browser with Claude vision for computer-use tasks.

Surf · Automation · Vision
import { SurfAgent, createNativeBackend, PuppeteerBackend } from '@lov3kaizen/agentsea-surf';

// Use native desktop
const nativeBackend = createNativeBackend();
await nativeBackend.connect();

// Or use Puppeteer for web
const browserBackend = new PuppeteerBackend({
  headless: false,
  viewport: { width: 1920, height: 1080 },
});
await browserBackend.connect();

// Create agent with vision
const agent = new SurfAgent('session-1', browserBackend, {
  maxSteps: 20,
  vision: { model: 'claude-sonnet-4-20250514', maxTokens: 4096 },
  sandbox: {
    enabled: true,
    maxActionsPerMinute: 60,
    blockedDomains: ['malicious-site.com'],
  },
});

// Execute natural language task
const result = await agent.execute('Open google.com and search for weather');

// Or stream events
for await (const event of agent.executeStream('Fill out the form')) {
  if (event.type === 'action') console.log('Action:', event.action.description);
  if (event.type === 'complete') console.log('Done:', event.response);
}

Ready to Get Started?

Check out our comprehensive documentation to learn more about building with AgentSea.