v0.5.2 release - Contributors, Sponsors and Enquiries are most welcome 😌

Per-Model Type Safety

Get compile-time TypeScript errors for invalid model options. Inspired by TanStack AI.

🎯 Why Per-Model Type Safety?

Different AI models have different capabilities. o1-mini doesn't support tools. o1 doesn't support system prompts. Claude 3 Haiku doesn't support extended thinking.

Without type safety, you discover these issues at runtime. With per-model type safety, TypeScript catches them at compile time with helpful error messages.
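Under the hood, this style of API can be modeled with a mapped type from model id to the exact options that model accepts. A minimal, self-contained sketch of the idea (hypothetical names, not the library's actual internals):

```typescript
// Hypothetical sketch of per-model typing: each model id maps to the
// option shape it accepts, and the builder is generic over that map.
type SketchModelOptions = {
  'o1-mini': { reasoningEffort?: 'low' | 'medium' | 'high' };
  'gpt-4o': { tools?: unknown[]; systemPrompt?: string; temperature?: number };
};

function openaiSketch<M extends keyof SketchModelOptions>(
  model: M,
  options: SketchModelOptions[M]
) {
  return { provider: 'openai' as const, model, options };
}

const ok = openaiSketch('o1-mini', { reasoningEffort: 'medium' }); // ✅ compiles
// openaiSketch('o1-mini', { tools: [] }); // ❌ 'tools' does not exist in type
console.log(ok.model); // 'o1-mini'
```

Because the second argument's type is derived from the model id, TypeScript's excess-property checks reject options the model doesn't support.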

Quick Start

```typescript
import { anthropic, openai, createProvider } from '@lov3kaizen/agentsea-core';

// ✅ Valid: Claude 3.5 Sonnet supports tools and extended thinking
const claudeConfig = anthropic('claude-3-5-sonnet-20241022', {
  tools: [myTool],
  systemPrompt: 'You are helpful',
  thinking: { type: 'enabled', budgetTokens: 10000 },
  temperature: 0.7,
});

// ✅ Valid: o1 supports tools but NOT system prompts
const o1Config = openai('o1', {
  tools: [myTool],
  reasoningEffort: 'high',
  // systemPrompt: '...' // ❌ TypeScript error!
});

// ❌ TypeScript error: o1-mini doesn't support tools
const o1MiniConfig = openai('o1-mini', {
  // tools: [myTool], // Error: 'tools' does not exist in type
  reasoningEffort: 'medium',
});

// Create type-safe providers
const provider = createProvider(claudeConfig);
console.log('Supports vision:', provider.supportsCapability('vision')); // true
```

Config Builders

AgentSea provides type-safe config builder functions for each provider:

Anthropic

```typescript
import { anthropic } from '@lov3kaizen/agentsea-core';

// Claude 3.5 Sonnet - supports EVERYTHING
const sonnetConfig = anthropic('claude-3-5-sonnet-20241022', {
  tools: [myTool],
  systemPrompt: 'You are a helpful assistant',
  thinking: { type: 'enabled', budgetTokens: 10000 }, // Extended thinking
  temperature: 0.7,
  maxTokens: 4096,
  providerOptions: {
    metadata: { userId: 'user-123' },
    betas: ['computer-use-2024-10-22'],
  },
});

// Claude 3 Haiku - NO extended thinking
const haikuConfig = anthropic('claude-3-haiku-20240307', {
  tools: [myTool],
  systemPrompt: 'You are fast',
  // thinking: { ... } // ❌ TypeScript error! Haiku doesn't support thinking
});
```

Supported Anthropic Models: claude-3-5-sonnet-*, claude-3-5-haiku-*, claude-3-opus-*, claude-3-sonnet-*, claude-3-haiku-*, claude-opus-4-*, claude-sonnet-4-*

OpenAI

```typescript
import { openai } from '@lov3kaizen/agentsea-core';

// GPT-4o - supports tools, system prompts, structured output
const gpt4oConfig = openai('gpt-4o', {
  tools: [myTool],
  systemPrompt: 'You are helpful',
  temperature: 0.8,
  providerOptions: {
    responseFormat: { type: 'json_object' },
    seed: 42,
    parallelToolCalls: true,
  },
});

// o1 - supports tools and reasoning, but NOT system prompts
const o1Config = openai('o1', {
  tools: [myTool],
  reasoningEffort: 'high',
  // systemPrompt: '...' // ❌ TypeScript error!
});

// o1-mini - NO tools, NO system prompts
const o1MiniConfig = openai('o1-mini', {
  reasoningEffort: 'medium',
  // tools: [...] // ❌ TypeScript error!
  // systemPrompt: '...' // ❌ TypeScript error!
});

// o3-mini - supports tools but NOT system prompts
const o3MiniConfig = openai('o3-mini', {
  tools: [myTool],
  reasoningEffort: 'high',
});
```

Supported OpenAI Models: gpt-4o*, gpt-4-turbo*, gpt-4, gpt-3.5-turbo*, o1, o1-mini, o1-preview, o3-mini

Gemini

```typescript
import { gemini } from '@lov3kaizen/agentsea-core';

// Gemini 1.5 Pro - supports everything
const geminiConfig = gemini('gemini-1.5-pro', {
  tools: [myTool],
  systemPrompt: 'You are helpful',
  topK: 40,
  temperature: 0.9,
  providerOptions: {
    safetySettings: [
      { category: 'HARM_CATEGORY_HARASSMENT', threshold: 'BLOCK_MEDIUM_AND_ABOVE' },
    ],
  },
});
```

Ollama (Local)

```typescript
import { ollama } from '@lov3kaizen/agentsea-core';

// Ollama - dynamic models (less strict typing)
const ollamaConfig = ollama('llama3.2', {
  tools: [myTool],
  systemPrompt: 'You are helpful',
  temperature: 0.7,
  providerOptions: {
    numCtx: 4096,
    numGpu: 1,
  },
});
```

Ollama models are dynamic (user-defined), so type safety is less strict. Tool support depends on the specific model being used.
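Because the compiler can't know which local models were pulled with tool support, a runtime guard is a reasonable fallback before attaching tools. A hypothetical sketch (the allowlist below is illustrative, not part of the package):

```typescript
// Hypothetical runtime guard for dynamic local models: keep your own
// allowlist of tool-capable models and consult it before passing tools.
const TOOL_CAPABLE_LOCAL_MODELS = new Set(['llama3.2', 'mistral-nemo']);

function localModelSupportsTools(model: string): boolean {
  return TOOL_CAPABLE_LOCAL_MODELS.has(model);
}

console.log(localModelSupportsTools('llama3.2'));  // true
console.log(localModelSupportsTools('tinyllama')); // false
```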

Model Capabilities Reference

Anthropic Models

| Model | Tools | Vision | Thinking | Context | Max Output |
|---|---|---|---|---|---|
| claude-3-5-sonnet-* | ✅ | ✅ | ✅ | 200K | 8,192 |
| claude-3-5-haiku-* | ✅ | ❌ | ❌ | 200K | 8,192 |
| claude-3-opus-* | ✅ | ✅ | ❌ | 200K | 4,096 |
| claude-opus-4-5-* | ✅ | ✅ | ✅ | 200K | 32,000 |

OpenAI Models

| Model | Tools | System Prompt | Vision | Thinking | Context |
|---|---|---|---|---|---|
| gpt-4o* | ✅ | ✅ | ✅ | ❌ | 128K |
| gpt-4-turbo | ✅ | ✅ | ✅ | ❌ | 128K |
| o1 | ✅ | ❌ | ✅ | ✅ | 200K |
| o1-mini | ❌ | ❌ | ❌ | ✅ | 128K |
| o3-mini | ✅ | ❌ | ❌ | ✅ | 200K |

Gemini Models

| Model | Tools | Vision | Thinking | Context |
|---|---|---|---|---|
| gemini-2.0-flash-exp | ✅ | ✅ | ❌ | 1M |
| gemini-2.0-flash-thinking | ❌ | ✅ | ✅ | 1M |
| gemini-1.5-pro | ✅ | ✅ | ❌ | 2M |
| gemini-1.5-flash | ✅ | ✅ | ❌ | 1M |

Runtime Capability Checks

Query model capabilities at runtime using the model registry:

```typescript
import {
  getModelInfo,
  modelSupportsCapability,
  getModelsForProvider,
  getModelsWithCapability,
} from '@lov3kaizen/agentsea-core';

// Get full model info
const info = getModelInfo('claude-3-5-sonnet-20241022');
console.log(info);
// {
//   provider: 'anthropic',
//   model: 'claude-3-5-sonnet-20241022',
//   displayName: 'Claude 3.5 Sonnet',
//   capabilities: {
//     tools: true,
//     streaming: true,
//     vision: true,
//     structuredOutput: true,
//     systemMessage: true,
//     extendedThinking: true,
//     contextWindow: 200000,
//     maxOutputTokens: 8192,
//     parallelToolCalls: true,
//   }
// }

// Check specific capability
const supportsTools = modelSupportsCapability('o1-mini', 'tools'); // false
const supportsVision = modelSupportsCapability('gpt-4o', 'vision'); // true

// Find models by provider
const anthropicModels = getModelsForProvider('anthropic');

// Find models with specific capabilities
const visionModels = getModelsWithCapability('vision', true);
const thinkingModels = getModelsWithCapability('extendedThinking', true);
```
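These queries compose naturally into model-selection logic. A self-contained sketch of the idea, using a tiny local capability table in place of the package's registry (all names below are hypothetical):

```typescript
// Hypothetical sketch: pick the first model that satisfies every required
// capability, mirroring what the registry queries above make possible.
type SketchCaps = { tools: boolean; vision: boolean };

const sketchRegistry: Record<string, SketchCaps> = {
  'o1-mini': { tools: false, vision: false },
  'gpt-4o': { tools: true, vision: true },
};

function pickModel(required: (keyof SketchCaps)[]): string | undefined {
  return Object.keys(sketchRegistry).find((model) =>
    required.every((cap) => sketchRegistry[model][cap])
  );
}

console.log(pickModel(['tools', 'vision'])); // 'gpt-4o'
```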

Type-Safe Provider Creation

```typescript
import {
  createProvider,
  createAnthropicProvider,
  createOpenAIProvider,
  anthropic,
  openai,
} from '@lov3kaizen/agentsea-core';

// Generic factory (works with any config)
const provider1 = createProvider(anthropic('claude-3-5-sonnet-20241022', { ... }));
const provider2 = createProvider(openai('gpt-4o', { ... }));

// Provider-specific factories (for explicit typing)
const claudeProvider = createAnthropicProvider(
  anthropic('claude-3-5-sonnet-20241022', { tools: [myTool] }),
  { apiKey: process.env.ANTHROPIC_API_KEY }
);

// Access typed config and capabilities
console.log(claudeProvider.config.model); // 'claude-3-5-sonnet-20241022'
console.log(claudeProvider.supportsCapability('vision')); // true
console.log(claudeProvider.getModelInfo()?.capabilities.contextWindow); // 200000
```

Migration Guide

Before (No Type Safety)

```typescript
// Runtime error: o1-mini doesn't support tools!
const agent = new Agent(
  {
    model: 'o1-mini',
    provider: 'openai',
    systemPrompt: 'Hello', // Also fails - no system prompt support
    tools: [myTool],
  },
  new OpenAIProvider(),
  toolRegistry
);
```

After (With Type Safety)

```typescript
import { openai, createProvider } from '@lov3kaizen/agentsea-core';

// TypeScript catches all errors at compile time!
const config = openai('o1-mini', {
  // systemPrompt: 'Hello', // ❌ Compile error
  // tools: [myTool], // ❌ Compile error
  reasoningEffort: 'high', // ✅ Valid option
});

const provider = createProvider(config);
```

Key Benefits

Zero Runtime Overhead

All validation happens during TypeScript compilation. No performance impact at runtime.
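One way to see this: invalid configs can be asserted in a type-level test with `@ts-expect-error`, and nothing extra ever runs in production. A hypothetical, self-contained sketch:

```typescript
// Hypothetical compile-time test: @ts-expect-error makes `tsc` fail unless
// the next line is a type error, so the constraint is enforced entirely at
// build time with zero runtime cost.
type O1MiniSketchOptions = { reasoningEffort?: 'low' | 'medium' | 'high' };

function o1MiniSketch(options: O1MiniSketchOptions) {
  return { model: 'o1-mini' as const, options };
}

// @ts-expect-error -- 'tools' is not a valid option for o1-mini
const bad = o1MiniSketch({ tools: [] });

const good = o1MiniSketch({ reasoningEffort: 'high' }); // ✅ compiles
console.log(good.model); // 'o1-mini'
```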

💡 IDE Autocomplete

Only options valid for the chosen model appear in your IDE's autocomplete suggestions.

🔍 Self-Documenting

Model capabilities are explicit in the type definitions, so there is no guessing about what's supported.

Next Steps

💡 Pro Tip

Use per-model type safety from the start of your project. It's much easier than debugging runtime errors later. The TypeScript compiler becomes your best friend when switching between models or providers!