Provider Reference
Complete guide to all 12+ LLM providers supported by AgentSea ADK. Mix and match providers for your needs.
🔌 12+ Providers Supported
Cloud Providers (6): Anthropic, OpenAI, Google, Azure OpenAI, Mistral AI, Cohere
Local Providers (6): Ollama, LM Studio, LocalAI, Text Generation WebUI, vLLM, Jan
Voice Providers (7): OpenAI Whisper, LemonFox STT, Local Whisper, OpenAI TTS, LemonFox TTS, ElevenLabs, Piper TTS
Provider Categories
Cloud Providers
Hosted APIs offering the highest quality, easy setup, and pay-per-token pricing.
Local Providers
Self-hosted models, complete privacy, no API costs, offline capability.
Voice Providers
Speech-to-Text and Text-to-Speech for voice-enabled agents.
Cloud Providers
Anthropic (Claude)
Leading AI safety company known for Claude models with strong reasoning and long context windows.
```typescript
import { AnthropicProvider } from '@lov3kaizen/agentsea-core';

const provider = new AnthropicProvider(process.env.ANTHROPIC_API_KEY);

// Available models:
// - claude-sonnet-4-20250514 (Latest, most capable)
// - claude-3-5-sonnet-20241022
// - claude-3-opus-20240229 (Highest intelligence)
// - claude-3-haiku-20240307 (Fastest, cheapest)

const agent = new Agent(
  {
    model: 'claude-sonnet-4-20250514',
    provider: 'anthropic',
    temperature: 0.7,
    maxTokens: 4096,
  },
  provider,
  toolRegistry,
);
```
Strengths:
- ✅ Excellent reasoning and analysis
- ✅ 200K context window
- ✅ Strong safety features
- ✅ Tool use (function calling)
- ✅ Vision capabilities
Best For:
- Complex reasoning tasks
- Code analysis
- Long document processing
- Safety-critical applications
OpenAI (GPT)
Pioneer in large language models, known for GPT-4 and strong ecosystem.
```typescript
import { OpenAIProvider } from '@lov3kaizen/agentsea-core';

const provider = new OpenAIProvider(process.env.OPENAI_API_KEY);

// Available models:
// - gpt-4-turbo-preview (Most capable)
// - gpt-4 (Reliable, proven)
// - gpt-3.5-turbo (Fast, cost-effective)
// - gpt-4-vision-preview (Vision support)

const agent = new Agent(
  {
    model: 'gpt-4-turbo-preview',
    provider: 'openai',
    temperature: 0.8,
    maxTokens: 2048,
  },
  provider,
  toolRegistry,
);
```
Strengths:
- ✅ Broad knowledge base
- ✅ Strong creative writing
- ✅ Function calling
- ✅ Large ecosystem
- ✅ Reliable performance
Best For:
- General-purpose tasks
- Creative content
- Customer service
- Content generation
Google (Gemini)
Google's multimodal AI models with strong reasoning and native tool integration.
```typescript
import { GoogleProvider } from '@lov3kaizen/agentsea-core';

const provider = new GoogleProvider(process.env.GOOGLE_AI_API_KEY);

// Available models:
// - gemini-2.0-flash-exp (Latest, fastest)
// - gemini-1.5-pro (Most capable)
// - gemini-1.5-flash (Fast, affordable)

const agent = new Agent(
  {
    model: 'gemini-2.0-flash-exp',
    provider: 'google',
    temperature: 0.7,
  },
  provider,
  toolRegistry,
);
```
Strengths:
- ✅ Multimodal (text, image, video, audio)
- ✅ 1M+ context window
- ✅ Fast inference
- ✅ Native function calling
- ✅ Code execution
Best For:
- Multimodal applications
- Long context tasks
- Video/audio analysis
- Scientific research
Other Cloud Providers
Azure OpenAI
Enterprise-grade OpenAI models with Azure infrastructure and compliance.
`AzureOpenAIProvider`
Mistral AI
European AI company with strong open models and competitive pricing.
`MistralAIProvider`
Cohere
Enterprise-focused AI with strong retrieval and generation capabilities.
`CohereProvider`
Local Providers
🔒 Why Local?
✅ Privacy: Data never leaves your infrastructure
✅ Cost: Zero API costs, unlimited usage
✅ Control: Full control over models and versions
✅ Offline: Works without internet connection
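Before pointing an agent at a local provider, it is worth verifying that the server is actually running. A minimal health-check sketch follows; the ports are the defaults used in the examples below, and the endpoints are each server's standard model-listing route (Ollama's own `/api/tags` API, and `/v1/models` for the OpenAI-compatible servers). Adjust both to your setup.

```typescript
// Each local server exposes a cheap "list models" endpoint usable as a
// health check. Ollama has its own API; the rest are OpenAI-compatible.
// Ports are this guide's defaults — adjust to your configuration.
const healthEndpoints: Record<string, string> = {
  ollama: 'http://localhost:11434/api/tags',
  'lm-studio': 'http://localhost:1234/v1/models',
  localai: 'http://localhost:8080/v1/models',
  vllm: 'http://localhost:8000/v1/models',
};

async function isServerUp(provider: string): Promise<boolean> {
  const url = healthEndpoints[provider];
  if (!url) throw new Error(`Unknown local provider: ${provider}`);
  try {
    // Abort after 2s so a down server fails fast instead of hanging
    const res = await fetch(url, { signal: AbortSignal.timeout(2000) });
    return res.ok;
  } catch {
    return false; // connection refused, timeout, DNS failure, etc.
  }
}
```

Calling `isServerUp('ollama')` before constructing the agent lets you fall back to a cloud provider (or surface a clear error) instead of failing mid-conversation.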
Ollama (Recommended)
The easiest way to run models locally. Simple CLI, automatic GPU support, growing model library.
```typescript
import { OllamaProvider } from '@lov3kaizen/agentsea-core';

const provider = new OllamaProvider({
  baseUrl: 'http://localhost:11434',
});

// Pull models
await provider.pullModel('llama3.2');
await provider.pullModel('mistral');

// List available models
const models = await provider.listModels();

const agent = new Agent(
  {
    model: 'llama3.2',
    provider: 'ollama',
  },
  provider,
  toolRegistry,
);
```
Popular Models: llama3.2 (3B), mistral (7B), qwen2.5 (7B), gemma2 (9B), codellama (7B), phi3 (3.8B)
LM Studio
Desktop app with beautiful UI. Download models with one click, built-in OpenAI-compatible server.
```typescript
import { LMStudioProvider } from '@lov3kaizen/agentsea-core';

const provider = new LMStudioProvider({
  baseUrl: 'http://localhost:1234',
});

const agent = new Agent(
  {
    model: 'local-model', // Whatever you loaded in LM Studio
    provider: 'lm-studio',
  },
  provider,
  toolRegistry,
);
```
LocalAI
Self-hosted OpenAI alternative. Supports LLMs, Stable Diffusion, voice, embeddings, more.
```typescript
import { LocalAIProvider } from '@lov3kaizen/agentsea-core';

const provider = new LocalAIProvider({
  baseUrl: 'http://localhost:8080',
});

const agent = new Agent(
  {
    model: 'llama-3.2-3b',
    provider: 'localai',
  },
  provider,
  toolRegistry,
);
```
Text Generation WebUI
Feature-rich web interface for running models. Extensions, characters, multiple backends.
```typescript
import { TextGenerationWebUIProvider } from '@lov3kaizen/agentsea-core';

const provider = new TextGenerationWebUIProvider({
  baseUrl: 'http://localhost:5000',
});
```
vLLM
High-throughput inference engine for production. Uses PagedAttention for efficiency.
```typescript
import { VLLMProvider } from '@lov3kaizen/agentsea-core';

const provider = new VLLMProvider({
  baseUrl: 'http://localhost:8000',
});

// Best for production with high request volume
```
Jan
Open source ChatGPT alternative. Desktop app with local execution.
```typescript
import { OpenAICompatibleProvider } from '@lov3kaizen/agentsea-core';

// Jan uses an OpenAI-compatible API
const provider = new OpenAICompatibleProvider({
  baseUrl: 'http://localhost:1337',
});
```
Provider Comparison
Cloud Providers
| Provider | Quality | Speed | Cost | Context | Best For |
|---|---|---|---|---|---|
| Anthropic | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | $$$ | 200K | Reasoning, safety |
| OpenAI | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | $$$ | 128K | General purpose |
| Google | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | $$ | 1M+ | Multimodal, long context |
| Mistral AI | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | $$ | 32K | European compliance |
| Cohere | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | $$ | 128K | Enterprise RAG |
Local Providers
| Provider | Ease of Use | Performance | Features | Best For |
|---|---|---|---|---|
| Ollama | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Model mgmt, CLI | Getting started |
| LM Studio | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | GUI, easy setup | Non-technical users |
| LocalAI | ⭐⭐⭐ | ⭐⭐⭐⭐ | Multi-modal, Docker | Self-hosted services |
| vLLM | ⭐⭐ | ⭐⭐⭐⭐⭐ | PagedAttention | Production scale |
| Text Gen WebUI | ⭐⭐⭐⭐ | ⭐⭐⭐ | Web UI, extensions | Experimentation |
| Jan | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | Desktop app | ChatGPT alternative |
Choosing a Provider
🎯 For Getting Started
Cloud: Start with Anthropic Claude or OpenAI GPT - easy setup, excellent quality
Local: Ollama with llama3.2 - simplest local setup
💰 For Cost Savings
Development: Ollama (free, unlimited)
Production: Google Gemini Flash (lowest cost per token) or vLLM (self-hosted)
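A back-of-the-envelope estimate makes the cloud-versus-local tradeoff concrete. The per-million-token price used here is a hypothetical placeholder, not a quoted rate; check each provider's current pricing page before relying on any number.

```typescript
// Monthly API cost = (tokens / 1M) * price per 1M tokens.
// Local providers have zero API cost (hardware and ops excluded).
function monthlyApiCost(tokensPerMonth: number, pricePerMillion: number): number {
  return (tokensPerMonth / 1_000_000) * pricePerMillion;
}

// Hypothetical workload: 500M tokens/month at $3 per 1M tokens.
const cloudCost = monthlyApiCost(500_000_000, 3); // 1500 ($/month)
const localCost = monthlyApiCost(500_000_000, 0); // 0 (API cost only)
```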
🔒 For Privacy
Complete Privacy: Ollama + Local Whisper + Piper TTS
Production Scale: vLLM with self-hosted models
⚡ For Performance
Speed: Google Gemini Flash or Claude Haiku
Quality: Claude Sonnet 4 or GPT-4 Turbo
Throughput: vLLM for production scale
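The guidance above can be encoded as a small selection helper. The model IDs come from earlier in this guide; the mapping itself is one reasonable reading of the recommendations, not part of the ADK API.

```typescript
type Priority = 'quality' | 'speed' | 'cost' | 'privacy';

// Maps a priority to a provider/model suggestion, mirroring the
// recommendations above. Illustrative only — tune to your workload.
function suggestProvider(priority: Priority, production = false): string {
  switch (priority) {
    case 'quality':
      return 'anthropic/claude-sonnet-4-20250514';
    case 'speed':
      return 'google/gemini-1.5-flash';
    case 'cost':
    case 'privacy':
      // Ollama for development; vLLM once you need production throughput
      return production ? 'vllm/self-hosted-model' : 'ollama/llama3.2';
  }
}
```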
Multi-Provider Setup
Use different providers for different tasks:
```typescript
import {
  Agent,
  AnthropicProvider,
  OllamaProvider,
  ToolRegistry,
} from '@lov3kaizen/agentsea-core';

// Cloud provider for complex tasks
const claudeProvider = new AnthropicProvider(process.env.ANTHROPIC_API_KEY);

// Local provider for simple tasks
const ollamaProvider = new OllamaProvider();

const toolRegistry = new ToolRegistry();

// Complex reasoning agent (cloud)
const researchAgent = new Agent(
  {
    name: 'researcher',
    model: 'claude-sonnet-4-20250514',
    provider: 'anthropic',
    systemPrompt: 'You are a research assistant.',
  },
  claudeProvider,
  toolRegistry,
);

// Simple task agent (local, free)
const helperAgent = new Agent(
  {
    name: 'helper',
    model: 'llama3.2',
    provider: 'ollama',
    systemPrompt: 'You are a helpful assistant.',
  },
  ollamaProvider,
  toolRegistry,
);

// Use the right agent for each task
const complexResult = await researchAgent.execute('Analyze this...', context);
const simpleResult = await helperAgent.execute('What is 2+2?', context);
```
Next Steps
- Voice Providers - STT and TTS providers
- Local Models Guide - Deep dive into local execution
- CLI Tool - Manage providers with CLI
- Agent Configuration - Configure agents with providers
- View Examples - Provider usage examples
💡 Pro Tip
Start with a cloud provider to validate your idea, then migrate to local providers for cost savings at scale. The same AgentSea ADK code works throughout; just swap the provider. High-volume production workloads running on self-hosted models can save $75K+ annually in API costs.
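Because an agent takes its provider as a constructor argument, the migration described above is a configuration change, and you can even combine cloud and local at runtime. Below is a sketch of a try-local-first fallback; the `ChatProvider` interface and stub objects are illustrative only, not the ADK's actual provider types.

```typescript
// Minimal provider shape for illustration; the ADK's real provider
// interface may differ. The pattern is what matters: prefer the free
// local model, fall back to cloud if the local server is unavailable.
interface ChatProvider {
  complete(prompt: string): Promise<string>;
}

async function completeWithFallback(
  local: ChatProvider,
  cloud: ChatProvider,
  prompt: string,
): Promise<string> {
  try {
    return await local.complete(prompt);
  } catch {
    return await cloud.complete(prompt); // e.g. local server is down
  }
}

// Stub providers to demonstrate the behavior:
const failingLocal: ChatProvider = {
  complete: async () => {
    throw new Error('connection refused');
  },
};
const cloudStub: ChatProvider = {
  complete: async (p) => `cloud answer to: ${p}`,
};
```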