API Reference
Complete API reference for AgentSea - REST API, WebSocket, and core classes.
REST API & Real-Time Streaming
AgentSea provides a comprehensive HTTP REST API, plus Server-Sent Events (SSE) and WebSocket support, for building web applications, mobile apps, and real-time interfaces.
Quick Setup
```typescript
import { Module } from '@nestjs/common';
import { AgenticModule } from '@lov3kaizen/agentsea-nestjs';

@Module({
  imports: [
    AgenticModule.forRoot({
      provider: 'anthropic',
      apiKey: process.env.ANTHROPIC_API_KEY,
      enableRestApi: true,   // Enable HTTP REST endpoints
      enableWebSocket: true, // Enable WebSocket gateway
    }),
  ],
})
export class AppModule {}
```
HTTP REST Endpoints
| Method | Endpoint | Description |
|---|---|---|
| GET | /agents | List all registered agents |
| GET | /agents/:name | Get agent details and configuration |
| POST | /agents/:name/execute | Execute agent with input |
| POST | /agents/:name/stream | Stream agent response (SSE) |
| DELETE | /agents/:name/conversations/:id | Clear conversation history |
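For a non-streaming call, a client can simply POST to the execute endpoint. The sketch below is illustrative; the agent name and the response shape are assumptions.
```typescript
// Hypothetical call to POST /agents/:name/execute; the response shape is an assumption.
const res = await fetch('http://localhost:3000/agents/customer-support/execute', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ input: 'Hello!' }),
});
console.log(await res.json());
```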
Server-Sent Events (SSE)
Stream agent responses in real-time with Server-Sent Events:
```typescript
// Client-side SSE streaming
const response = await fetch('/agents/chat/stream', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Accept': 'text/event-stream',
  },
  body: JSON.stringify({ input: 'Hello!' }),
});

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const chunk = decoder.decode(value);
  // Parse SSE events: iteration, content, tool_calls, tool_result, done, error
  console.log(chunk);
}
```
WebSocket
Bidirectional real-time communication with Socket.IO:
```typescript
import { io } from 'socket.io-client';

const socket = io('http://localhost:3000/agents');

// Execute agent
socket.emit('execute', {
  agentName: 'customer-support',
  input: 'Hello!',
});

// Listen for streaming events
socket.on('stream', (event) => {
  if (event.type === 'content') {
    console.log(event.content); // Real-time content
  }
});

// Get agent info
socket.emit('getAgent', { agentName: 'customer-support' });
socket.on('agentInfo', (info) => console.log(info));

// List all agents
socket.emit('listAgents');
socket.on('agentList', (data) => console.log(data.agents));
```
👥 Crews
Crew
Multi-agent crew for coordinated task execution
Constructor
createCrew(config: CrewConfig)
Methods
addTask(task: TaskConfig): void
Add a task to the crew queue
kickoff(): Promise<CrewResult>
Start crew execution and return final result
getProgress(): CrewProgress
Get current execution progress
Properties
| Property | Type | Description |
|---|---|---|
| name | string | Crew name identifier |
| delegationStrategy | DelegationStrategy | Task assignment strategy |
| agents | CrewAgent[] | Array of crew agents |
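A minimal usage sketch follows. The import path, the delegationStrategy value, and the TaskConfig fields are assumptions; researcherAgent and writerAgent stand for Agent instances created as in the Agent section below.
```typescript
import { createCrew } from '@lov3kaizen/agentsea-core'; // import path assumed

const crew = createCrew({
  name: 'research-crew',
  delegationStrategy: 'round-robin',      // assumed DelegationStrategy value
  agents: [researcherAgent, writerAgent], // Agent instances created elsewhere
});

// TaskConfig fields below are assumptions
crew.addTask({ description: 'Summarize recent work on LLM routing' });

const result = await crew.kickoff();
console.log(result);
```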
Pre-built Templates
Ready-to-use crew configurations
Constructor
Factory functions
Methods
createResearchCrew(options: ResearchCrewOptions): Crew
Create a research-focused crew with researcher and writer agents
createCodeReviewCrew(options: CodeReviewOptions): Crew
Create a code review crew with reviewer and security agents
createWritingCrew(options: WritingOptions): Crew
Create a content writing crew
createCustomerSupportCrew(options: SupportOptions): Crew
Create a customer support crew
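For example, a research crew could be created directly from its factory; the import path and the ResearchCrewOptions fields shown here are assumptions.
```typescript
import { createResearchCrew } from '@lov3kaizen/agentsea-core'; // import path assumed

const crew = createResearchCrew({ topic: 'on-device LLM inference' }); // options shape assumed
const report = await crew.kickoff();
console.log(report);
```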
🌐 Gateway
Gateway
OpenAI-compatible LLM gateway with intelligent routing
Constructor
new Gateway(config: GatewayConfig)
Methods
chat.completions.create(request): Promise<ChatCompletion>
Create chat completion with auto-routing
getMetrics(): GatewayMetrics
Get usage metrics (requests, costs, latency)
checkHealth(): Promise<HealthStatus>
Check health of all providers
shutdown(): Promise<void>
Gracefully shutdown gateway
Properties
| Property | Type | Description |
|---|---|---|
| providers | ProviderConfig[] | Configured LLM providers |
| routing | RoutingConfig | Routing strategy configuration |
| cache | CacheConfig | Response cache settings |
Virtual Models
Auto-routing model aliases
Constructor
Use as model name
Methods
model: "best"
Route to highest quality model
model: "cheapest"
Route to lowest cost model
model: "fastest"
Route to lowest latency provider
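Putting the two together, the sketch below routes an OpenAI-compatible chat completion through a virtual model alias. The import path and the GatewayConfig fields are assumptions; the response is read assuming an OpenAI-compatible shape.
```typescript
import { Gateway } from '@lov3kaizen/agentsea-core'; // import path assumed

const gateway = new Gateway({
  // provider entries below are illustrative
  providers: [
    { provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY },
    { provider: 'openai', apiKey: process.env.OPENAI_API_KEY },
  ],
});

// "cheapest" is a virtual model alias resolved by the router
const completion = await gateway.chat.completions.create({
  model: 'cheapest',
  messages: [{ role: 'user', content: 'Summarize this release note in one line.' }],
});

console.log(completion.choices[0].message.content); // OpenAI-compatible response shape
console.log(gateway.getMetrics());                  // requests, costs, latency
await gateway.shutdown();
```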
🛡️ Guardrails
GuardrailsEngine
Safety and validation engine for AI inputs/outputs
Constructor
createGuardrailsEngine(config: GuardrailsConfig)
Methods
registerGuard(guard: Guard): void
Register a guard instance
checkInput(input: string, context?: GuardContext): Promise<GuardResult>
Check input against all input guards
checkOutput(output: string, context?: GuardContext): Promise<GuardResult>
Check output against all output guards
Properties
| Property | Type | Description |
|---|---|---|
| failureMode | 'fail-fast' \| 'fail-safe' \| 'collect-all' | How to handle guard failures |
| defaultAction | 'allow' \| 'block' \| 'warn' | Default action when no guard blocks |
Built-in Guards
Pre-built safety guards
Constructor
new Guard(options)
Methods
ToxicityGuard({ sensitivity: "low" | "medium" | "high" })
Detect toxic or harmful content
PIIGuard({ types: string[], maskingStrategy: "redact" | "mask" | "hash" })
Detect and mask personally identifiable information
PromptInjectionGuard({ sensitivity: "low" | "medium" | "high" })
Detect prompt injection attempts
SchemaGuard({ schema: ZodSchema })
Validate output against Zod schema
TokenBudgetGuard({ maxTokensPerRequest: number })
Enforce token limits
CostGuard({ maxCostPerRequest: number })
Enforce cost limits
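A typical setup registers a few guards and checks input before it reaches the model. The import path, the engine config fields, the use of `new` on the guards, and the GuardResult shape are assumptions here.
```typescript
import {
  createGuardrailsEngine,
  PIIGuard,
  ToxicityGuard,
} from '@lov3kaizen/agentsea-core'; // import path assumed

const guardrails = createGuardrailsEngine({ failureMode: 'fail-fast' });

guardrails.registerGuard(new PIIGuard({ types: ['email', 'phone'], maskingStrategy: 'mask' }));
guardrails.registerGuard(new ToxicityGuard({ sensitivity: 'medium' }));

const check = await guardrails.checkInput('My email is jane@example.com');
if (!check.passed) { // GuardResult field name assumed
  // block, warn, or use the masked input depending on your policy
}
```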
📊 Evaluate
EvaluationPipeline
Pipeline for running LLM evaluations
Constructor
new EvaluationPipeline(config: PipelineConfig)
Methods
evaluate(options: EvaluateOptions): Promise<PipelineResult>
Run evaluation on a dataset
Properties
| Property | Type | Description |
|---|---|---|
| metrics | Metric[] | Evaluation metrics to use |
| parallelism | number | Number of parallel evaluations |
Built-in Metrics
Pre-built evaluation metrics
Constructor
new Metric(options)
Methods
AccuracyMetric({ type: "exact" | "fuzzy" | "semantic" })
Measure accuracy against expected output
RelevanceMetric()
Measure relevance to input
CoherenceMetric()
Measure logical consistency
ToxicityMetric()
Detect harmful content
FaithfulnessMetric()
Measure factual accuracy (RAG)
LLM-as-Judge
Use LLMs to evaluate responses
Constructor
new Judge(config)
Methods
RubricJudge({ provider, rubric })
Evaluate with custom rubric levels
ComparativeJudge({ provider, criteria })
Compare two responses head-to-head
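A small pipeline run might look like the following sketch; the import path, the dataset item shape, and the EvaluateOptions fields are assumptions, while the class names and evaluate() signature follow the reference above.
```typescript
import {
  EvaluationPipeline,
  AccuracyMetric,
  RelevanceMetric,
} from '@lov3kaizen/agentsea-core'; // import path assumed

const pipeline = new EvaluationPipeline({
  metrics: [new AccuracyMetric({ type: 'semantic' }), new RelevanceMetric()],
  parallelism: 4,
});

// Dataset item shape ({ input, expected }) is an assumption
const result = await pipeline.evaluate({
  dataset: [{ input: 'What is the capital of France?', expected: 'Paris' }],
});

console.log(result);
```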
🔍 Embeddings
EmbeddingManager
Manage embedding lifecycle with caching and stores
Constructor
createEmbeddingManager(config: EmbeddingConfig)
Methods
registerModel(provider: EmbeddingProvider, isDefault?: boolean): void
Register an embedding provider
embed(text: string): Promise<EmbeddingResult>
Embed a single text
embedBatch(texts: string[]): Promise<BatchResult>
Embed multiple texts
embedDocument(text: string, options: DocOptions): Promise<Chunk[]>
Chunk and embed a document
search(query: string, options: SearchOptions): Promise<SearchResult[]>
Search for similar content
similarity(text1: string, text2: string): Promise<number>
Calculate similarity between texts
Chunking Strategies
Document chunking implementations
Constructor
createChunker()
Methods
createFixedChunker()
Fixed-size character chunking
createRecursiveChunker()
Recursive text splitting
createMarkdownChunker()
Markdown-aware chunking
createCodeChunker()
Code-aware chunking by functions
createSemanticChunker()
Semantic similarity-based chunking
Vector Stores
Vector storage backends
Constructor
createStore(config)
Methods
createMemoryStore({ dimensions })
In-memory vector store
createPineconeStore({ apiKey, indexName })
Pinecone vector store
createChromaStore({ url, collectionName })
Chroma vector store
createQdrantStore({ url, collectionName })
Qdrant vector store
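End to end, embedding and search could look like the sketch below. The import path, the manager/store config fields, and the DocOptions/SearchOptions fields are assumptions; method names follow the reference above.
```typescript
import { createEmbeddingManager, createMemoryStore } from '@lov3kaizen/agentsea-core'; // path assumed

const embeddings = createEmbeddingManager({
  store: createMemoryStore({ dimensions: 1536 }), // config field names assumed
});

const doc = 'Long support article about API key rotation...';
await embeddings.embedDocument(doc, { chunkSize: 512 });                         // DocOptions assumed
const hits = await embeddings.search('how do I rotate API keys?', { topK: 5 }); // SearchOptions assumed

const score = await embeddings.similarity('refund policy', 'money-back guarantee');
console.log(hits, score);
```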
🖥️ Surf (Browser Automation)
SurfAgent
Computer-use agent with Claude vision
Constructor
new SurfAgent(sessionId: string, backend: Backend, config: SurfConfig)
Methods
execute(task: string): Promise<SurfResult>
Execute a natural language task
executeStream(task: string): AsyncIterable<SurfEvent>
Stream execution events
stop(): void
Stop current execution
getState(): SurfState
Get current agent state
Properties
| Property | Type | Description |
|---|---|---|
| maxSteps | number | Maximum execution steps |
| sandbox | SandboxConfig | Security sandbox settings |
Backends
Desktop/browser backends
Constructor
createBackend(config)
Methods
createNativeBackend()
Native desktop backend (macOS, Linux, Windows)
new PuppeteerBackend({ headless, viewport })
Puppeteer browser backend
new DockerBackend({ image, resolution })
Docker container backend
Computer-Use Tools
8 built-in computer interaction tools
Constructor
createSurfTools(backend)
Methods
screenshot
Take a screenshot
click({ x, y })
Click at coordinates
typeText({ text })
Type text
scroll({ direction, amount })
Scroll the screen
keyPress({ key })
Press a key
drag({ startX, startY, endX, endY })
Drag from one point to another
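A browser-automation run could be wired up as in this sketch. The import path, the PuppeteerBackend options, and the SurfConfig fields are assumptions; the constructor arguments and streaming API follow the reference above.
```typescript
import { SurfAgent, PuppeteerBackend } from '@lov3kaizen/agentsea-core'; // import path assumed

const backend = new PuppeteerBackend({ headless: true, viewport: { width: 1280, height: 800 } });
const surf = new SurfAgent('session-1', backend, { maxSteps: 20 }); // SurfConfig fields assumed

// Stream execution events for a natural-language task
for await (const event of surf.executeStream('Open example.com and read the page title')) {
  console.log(event);
}

console.log(surf.getState());
```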
🤖 Agent
Agent
Core agent class for executing tasks with LLMs
Constructor
new Agent(
  config: AgentConfig,
  provider: LLMProvider,
  toolRegistry: ToolRegistry,
  memory?: MemoryStore
)
Methods
execute(prompt: string, context: AgentContext): Promise<AgentResponse>
Execute the agent with the given prompt and context
executeStream(prompt: string, context: AgentContext): AsyncIterable<StreamEvent>
Stream agent responses in real-time with events
formatResponse(response: AgentResponse): FormattedContent
Format agent response to specified output format
Properties
| Property | Type | Description |
|---|---|---|
| name | string | Agent name identifier |
| config | AgentConfig | Agent configuration |
| provider | LLMProvider | LLM provider instance |
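A bare-bones agent setup might look like this; the import path, the AgentConfig fields, the provider variable, and the AgentContext fields are assumptions, while the constructor and method signatures follow the reference above.
```typescript
import { Agent, ToolRegistry } from '@lov3kaizen/agentsea-core'; // import path assumed

const registry = new ToolRegistry();

const agent = new Agent(
  { name: 'assistant', model: 'claude-sonnet-4-5' }, // AgentConfig fields assumed
  anthropicProvider,                                 // an LLMProvider instance created elsewhere
  registry,
);

const response = await agent.execute('Draft a short welcome email', { conversationId: 'conv-1' });
console.log(agent.formatResponse(response));
```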
ContentFormatter
Format agent output to text, markdown, HTML, or React
Constructor
Static utility class
Methods
static format(content: string, format: OutputFormat, options?: FormatOptions): FormattedContent
Format content to specified output format
🔧 Tools
ToolRegistry
Registry for managing agent tools
Constructor
new ToolRegistry()
Methods
register(tool: Tool): void
Register a single tool
registerMany(tools: Tool[]): void
Register multiple tools at once
get(name: string): Tool | undefined
Get a tool by name
has(name: string): boolean
Check if a tool exists
list(): Tool[]
List all registered tools
unregister(name: string): void
Remove a tool from the registry
Tool
Tool interface for agent capabilities
Constructor
new Tool()
Properties
| Property | Type | Description |
|---|---|---|
| name | string | Unique tool identifier |
| description | string | What the tool does |
| inputSchema | ZodSchema | Zod schema for input validation |
| execute | (input: any) => Promise<any> | Tool execution function |
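Because a Tool is just a name, description, Zod input schema, and execute function, it can be defined as a plain object and registered. The import path and the exact Tool typing are assumptions here.
```typescript
import { z } from 'zod';
import { ToolRegistry, type Tool } from '@lov3kaizen/agentsea-core'; // import path assumed

const weatherTool: Tool = {
  name: 'get_weather',
  description: 'Look up the current weather for a city',
  inputSchema: z.object({ city: z.string() }),
  execute: async (input: { city: string }) => {
    // Replace with a real weather API call
    return { city: input.city, tempC: 21 };
  },
};

const registry = new ToolRegistry();
registry.register(weatherTool);
console.log(registry.list().map((t) => t.name)); // ['get_weather']
```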
🔄 Workflows
WorkflowFactory
Factory for creating workflow instances
Constructor
Static factory class
Methods
create(config: WorkflowConfig, provider: LLMProvider, toolRegistry: ToolRegistry): Workflow
Create a workflow from configuration
SequentialWorkflow
Execute agents one after another
Constructor
new SequentialWorkflow(agents: Agent[])
Methods
execute(input: string, context: ExecutionContext): Promise<LLMResponse>
Execute workflow sequentially
ParallelWorkflow
Execute agents in parallel
Constructor
new ParallelWorkflow(agents: Agent[])
Methods
execute(input: string, context: ExecutionContext): Promise<LLMResponse>
Execute workflow in parallel
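For example, a two-step pipeline can be run sequentially as below; the import path and the ExecutionContext fields are assumptions, and the two agents stand for instances created as in the Agent section.
```typescript
import { SequentialWorkflow } from '@lov3kaizen/agentsea-core'; // import path assumed

// researchAgent and writerAgent are Agent instances created elsewhere
const workflow = new SequentialWorkflow([researchAgent, writerAgent]);

const result = await workflow.execute(
  'Write a short post about MCP',
  { conversationId: 'conv-42' }, // ExecutionContext fields assumed
);
console.log(result);
```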
💾 Memory
BufferMemory
In-memory storage for conversation history
Constructor
new BufferMemory(maxMessages: number)
Methods
save(conversationId: string, messages: Message[]): Promise<void>
Save messages to memory
load(conversationId: string): Promise<Message[]>
Load messages from memory
clear(conversationId: string): Promise<void>
Clear conversation history
RedisMemory
Persistent storage using Redis
Constructor
new RedisMemory(options: RedisMemoryOptions)
Methods
save(conversationId: string, messages: Message[]): Promise<void>
Save messages to Redis
load(conversationId: string): Promise<Message[]>
Load messages from Redis
disconnect(): Promise<void>
Disconnect from Redis
SummaryMemory
Memory with automatic summarization of older messages
Constructor
new SummaryMemory(options: SummaryMemoryOptions)
Methods
save(conversationId: string, messages: Message[]): Promise<void>
Save messages with automatic summarization
load(conversationId: string): Promise<Message[]>
Load messages including summaries
search(conversationId: string, query: string): Promise<Message[]>
Search messages by semantic similarity
TenantBufferMemory
Tenant-scoped in-memory storage
Constructor
new TenantBufferMemory(maxMessages: number)
Methods
save(conversationId: string, messages: Message[], tenantId: string): Promise<void>
Save messages scoped to tenant
load(conversationId: string, tenantId: string): Promise<Message[]>
Load tenant-scoped messages
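The memory stores share the same save/load contract, so swapping backends is mostly a constructor change. In this sketch the import path and the Message fields are assumptions.
```typescript
import { BufferMemory } from '@lov3kaizen/agentsea-core'; // import path assumed

const memory = new BufferMemory(50); // keep at most 50 messages per conversation

await memory.save('conv-1', [
  { role: 'user', content: 'Hi!' },                        // Message fields assumed
  { role: 'assistant', content: 'Hello! How can I help?' },
]);

const history = await memory.load('conv-1');
console.log(history.length);

await memory.clear('conv-1');
```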
🔌 MCP
MCPRegistry
Manage multiple MCP server connections
Constructor
new MCPRegistry()
Methods
addServer(config: MCPServerConfig): Promise<MCPClient>
Connect to an MCP server
removeServer(name: string): Promise<void>
Disconnect from an MCP server
getTools(): Tool[]
Get all tools from all servers
getServerTools(serverName: string): Tool[]
Get tools from specific server
disconnectAll(): Promise<void>
Disconnect from all servers
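A registry can aggregate tools from several MCP servers and hand them to an agent. The import path and the MCPServerConfig fields below (a stdio-style server definition) are assumptions.
```typescript
import { MCPRegistry } from '@lov3kaizen/agentsea-core'; // import path assumed

const mcp = new MCPRegistry();

// MCPServerConfig fields are assumed here
await mcp.addServer({
  name: 'filesystem',
  command: 'npx',
  args: ['-y', '@modelcontextprotocol/server-filesystem', './docs'],
});

const tools = mcp.getTools(); // tools from all connected servers
console.log(tools.map((t) => t.name));

await mcp.disconnectAll();
```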
📊 Observability
Logger
Structured logging with Winston
Constructor
new Logger(options?: LoggerOptions)
Methods
error(message: string, meta?: any): void
Log error message
warn(message: string, meta?: any): void
Log warning message
info(message: string, meta?: any): void
Log info message
debug(message: string, meta?: any): void
Log debug message
MetricsCollector
Performance metrics collection and aggregation
Constructor
new MetricsCollector()
Methods
record(metrics: AgentMetrics): void
Record agent execution metrics
getAll(): AgentMetrics[]
Get all recorded metrics
getByAgent(agentName: string): AgentMetrics[]
Get metrics for specific agent
getByTimeRange(start: Date, end: Date): AgentMetrics[]
Get metrics within time range
getStats(agentName?: string): MetricsStats
Get aggregated statistics
subscribe(callback: (metrics: AgentMetrics) => void): () => void
Subscribe to new metrics events
Tracer
Distributed tracing for agent executions
Constructor
new Tracer()
Methods
startSpan(name: string, context?: SpanContext): Span
Start a new trace span
endSpan(span: Span): void
End a trace span
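A quick sketch of the three observability pieces together; the import path and the AgentMetrics fields recorded below are assumptions, while the method names follow the reference above.
```typescript
import { Logger, MetricsCollector, Tracer } from '@lov3kaizen/agentsea-core'; // import path assumed

const logger = new Logger();
const metrics = new MetricsCollector();
const tracer = new Tracer();

const unsubscribe = metrics.subscribe((m) => logger.info('new metrics', m));

const span = tracer.startSpan('agent.execute');
// ... run an agent ...
tracer.endSpan(span);

metrics.record({ agentName: 'assistant', durationMs: 1240, tokensUsed: 512 }); // fields assumed
logger.info('stats', metrics.getStats('assistant'));
unsubscribe();
```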
🎙️ Voice
VoiceAgent
Voice-enabled agent with speech-to-text and text-to-speech
Constructor
new VoiceAgent(
  agent: Agent,
  config: VoiceAgentConfig
)
Methods
processVoice(audio: Buffer): Promise<VoiceMessage>
Process audio input and return voice response
speak(text: string): Promise<VoiceMessage>
Convert text to voice response
transcribe(audio: Buffer): Promise<STTResult>
Transcribe audio to text
synthesize(text: string): Promise<TTSResult>
Synthesize text to audio
synthesizeStream(text: string): AsyncIterable<Buffer>
Stream synthesized audio chunks
getHistory(): VoiceMessage[]
Get voice conversation history
saveAudio(audio: Buffer, path: string): Promise<void>
Save audio buffer to file
OpenAIWhisperProvider
OpenAI Whisper speech-to-text provider
Constructor
new OpenAIWhisperProvider(apiKey: string)
Methods
transcribe(audio: Buffer, config?: STTConfig): Promise<STTResult>
Transcribe audio using Whisper
OpenAITTSProvider
OpenAI text-to-speech provider
Constructor
new OpenAITTSProvider(apiKey: string)
Methods
synthesize(text: string, config?: TTSConfig): Promise<TTSResult>
Synthesize speech using OpenAI TTS
getVoices(): Promise<VoiceType[]>
Get available voice options
ElevenLabsTTSProvider
ElevenLabs high-quality text-to-speech
Constructor
new ElevenLabsTTSProvider(apiKey: string)
Methods
synthesize(text: string, config?: TTSConfig): Promise<TTSResult>
Synthesize speech using ElevenLabs
synthesizeStream(text: string, config?: TTSConfig): AsyncIterable<Buffer>
Stream synthesized audio
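A voice round-trip could be wired up as follows. The import paths, the VoiceAgentConfig fields, and the VoiceMessage.audio field are assumptions; agent is an Agent instance created as in the Agent section.
```typescript
import { readFile } from 'node:fs/promises';
import {
  VoiceAgent,
  OpenAIWhisperProvider,
  OpenAITTSProvider,
} from '@lov3kaizen/agentsea-core'; // import path assumed

const voiceAgent = new VoiceAgent(agent, {
  stt: new OpenAIWhisperProvider(process.env.OPENAI_API_KEY!), // config field names assumed
  tts: new OpenAITTSProvider(process.env.OPENAI_API_KEY!),
});

const audio = await readFile('./question.wav');
const reply = await voiceAgent.processVoice(audio);     // transcribe, run the agent, synthesize
await voiceAgent.saveAudio(reply.audio, './reply.mp3'); // VoiceMessage.audio field assumed
```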
💬 Conversation
ConversationSchema
Define structured multi-step conversation flows
Constructor
new ConversationSchema(config: ConversationSchemaConfig)
Methods
getState(): ConversationState
Get current conversation state
getCurrentStep(): ConversationStep
Get current step configuration
processResponse(response: string): ProcessResult
Process response and advance conversation
reset(): void
Reset conversation to initial state
ConversationManager
Manage AI-assisted structured conversations
Constructor
new ConversationManager(
  schema: ConversationSchema,
  agent: Agent
)
Methods
start(): Promise<Message>
Start the conversation
processMessage(input: string): Promise<Message>
Process user input with AI assistance
getState(): ConversationState
Get current conversation state
getHistory(): Message[]
Get conversation history
export(): ConversationExport
Export conversation for persistence
import(data: ConversationExport): void
Import saved conversation
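Driving a structured flow with AI assistance might look like the sketch below; the import path and the ConversationSchemaConfig step shape are assumptions, and agent is an Agent instance created as in the Agent section.
```typescript
import { ConversationSchema, ConversationManager } from '@lov3kaizen/agentsea-core'; // path assumed

// Step shape ({ id, prompt }) is an assumption
const schema = new ConversationSchema({
  steps: [
    { id: 'name', prompt: 'What is your name?' },
    { id: 'issue', prompt: 'How can we help you today?' },
  ],
});

const manager = new ConversationManager(schema, agent);

await manager.start();
await manager.processMessage('My name is Jane');
console.log(manager.getState());

const saved = manager.export(); // persist and later restore with manager.import(saved)
```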
🏢 Multi-Tenancy
TenantManager
Manage tenant lifecycle and isolation
Constructor
new TenantManager(storage: TenantStorage)
Methods
createTenant(data: CreateTenantData): Promise<Tenant>
Create a new tenant
getTenant(id: string): Promise<Tenant | null>
Get tenant by ID
getTenantBySlug(slug: string): Promise<Tenant | null>
Get tenant by slug
updateTenant(id: string, data: UpdateTenantData): Promise<Tenant>
Update tenant settings
deleteTenant(id: string): Promise<void>
Delete a tenant
listTenants(options?: ListOptions): Promise<Tenant[]>
List all tenants with pagination
createApiKey(tenantId: string, data: CreateApiKeyData): Promise<TenantApiKey>
Create API key for tenant
revokeApiKey(keyId: string): Promise<void>
Revoke an API key
MemoryTenantStorage
In-memory tenant storage implementation
Constructor
new MemoryTenantStorage()
Methods
createTenant(tenant: Tenant): Promise<Tenant>
Store a new tenant
getTenant(id: string): Promise<Tenant | null>
Retrieve tenant by ID
updateQuota(tenantId: string, quota: TenantQuota): Promise<void>
Update tenant quota usage
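Tenant provisioning with the in-memory storage backend could look like this; the import path and the CreateTenantData/CreateApiKeyData fields are assumptions.
```typescript
import { TenantManager, MemoryTenantStorage } from '@lov3kaizen/agentsea-core'; // path assumed

const tenants = new TenantManager(new MemoryTenantStorage());

const tenant = await tenants.createTenant({ name: 'Acme Inc', slug: 'acme' }); // fields assumed
const apiKey = await tenants.createApiKey(tenant.id, { name: 'production' });  // fields assumed

const bySlug = await tenants.getTenantBySlug('acme');
console.log(bySlug?.id);

await tenants.revokeApiKey(apiKey.id);
```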
🛒 ACP (Commerce)
ACPClient
Agentic Commerce Protocol client for e-commerce integration
Constructor
new ACPClient(config: ACPConfig)
Methods
searchProducts(query: ACPProductSearchQuery): Promise<ACPProductSearchResult>
Search products in catalog
getProduct(productId: string): Promise<ACPProduct>
Get product details
createCart(customerId?: string): Promise<ACPCart>
Create a new shopping cart
addToCart(cartId: string, productId: string, quantity: number): Promise<ACPCart>
Add item to cart
createCheckoutSession(cartId: string): Promise<ACPCheckoutSession>
Create checkout session
processPayment(sessionId: string, paymentMethod: ACPPaymentMethod): Promise<ACPPaymentIntent>
Process payment for checkout
getOrder(orderId: string): Promise<ACPOrder>
Get order details
Properties
| Property | Type | Description |
|---|---|---|
| config | ACPConfig | Client configuration |
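A simple storefront flow with the client might look like this sketch; the import path, the ACPConfig fields, and the result/cart field names are assumptions, while the method signatures follow the reference above.
```typescript
import { ACPClient } from '@lov3kaizen/agentsea-core'; // import path assumed

const acp = new ACPClient({
  baseUrl: 'https://shop.example.com', // ACPConfig fields assumed
  apiKey: process.env.ACP_API_KEY,
});

const results = await acp.searchProducts({ query: 'running shoes' });
const cart = await acp.createCart();
await acp.addToCart(cart.id, results.products[0].id, 1); // result/cart field names assumed

const session = await acp.createCheckoutSession(cart.id);
// processPayment(session.id, paymentMethod) would complete the purchase
console.log(session);
```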
📘 TypeScript Support
AgentSea is fully typed with comprehensive TypeScript definitions. Types are available from the dedicated types package or re-exported from core:
Dedicated Types Package
```bash
npm install @lov3kaizen/agentsea-types
```
Core Agent Types
```typescript
import type {
  // Agent & Execution
  AgentConfig,
  AgentContext,
  AgentResponse,
  Message,
  FormattedContent,
  OutputFormat,
  FormatOptions,

  // Tools
  Tool,
  ToolCall,
  ToolContext,
  RetryConfig,

  // Providers
  LLMProvider,
  ProviderConfig,
  LLMResponse,
  LLMStreamChunk,
  ProviderInstanceConfig,

  // Memory
  MemoryConfig,
  MemoryStore,

  // Workflows
  WorkflowType,
  WorkflowConfig,
  RoutingLogic,
  RoutingRule,
  ErrorHandlingStrategy,

  // Streaming
  StreamEvent,

  // Observability
  AgentMetrics,
  SpanContext,
} from '@lov3kaizen/agentsea-types';
// Or from core: '@lov3kaizen/agentsea-core'
```
Multi-Tenancy Types
```typescript
import type {
  Tenant,
  TenantStatus,
  TenantSettings,
  TenantContext,
  TenantApiKey,
  TenantQuota,
  TenantStorage,
  TenantResolver,
} from '@lov3kaizen/agentsea-types';
```
Voice & Speech Types
```typescript
import type {
  AudioFormat,
  VoiceType,
  STTConfig,
  TTSConfig,
  STTResult,
  TTSResult,
  STTProvider,
  TTSProvider,
  VoiceMessage,
  VoiceAgentConfig,
} from '@lov3kaizen/agentsea-types';
```
Protocol Types (MCP & ACP)
```typescript
// Model Context Protocol
import type {
  MCPServerConfig,
  MCPTool,
  MCPResource,
  MCPPrompt,
} from '@lov3kaizen/agentsea-core';

// Agentic Commerce Protocol
import type {
  ACPProduct,
  ACPCart,
  ACPCheckoutSession,
  ACPOrder,
  ACPCustomer,
} from '@lov3kaizen/agentsea-core';
```