# LLM Providers

Unified configuration for multiple LLM providers. Switch between OpenAI, Anthropic, Google, and Ollama without changing your application code.

## Supported Providers

| Provider  | Models                                 | Streaming |
|-----------|----------------------------------------|-----------|
| OpenAI    | GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, o1  | Yes       |
| Anthropic | Claude 3 Opus, Sonnet, Haiku           | Yes       |
| Google    | Gemini Pro, Gemini 1.5 Pro/Flash       | Yes       |
| Ollama    | Llama 3, Mistral, Mixtral, custom      | Yes       |
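
Every provider supports streaming responses. As a rough sketch of what consuming a stream might look like, assuming the agent exposes a streaming method — the `streamMessage` name and chunk shape below are illustrative assumptions, not confirmed SDK API:

```typescript
import { getAgent } from '@sekuire/sdk';

// Hypothetical streaming sketch: `streamMessage` and `chunk.text`
// are assumed names for illustration, not confirmed SDK surface.
const agent = await getAgent('assistant');

for await (const chunk of agent.streamMessage('Summarize this repo')) {
  process.stdout.write(chunk.text); // print tokens as they arrive
}
```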

## Configuration

Configure providers in `sekuire.yml`:

```yaml
# sekuire.yml
agents:
  assistant:
    name: "AI Assistant"
    system_prompt: "./prompts/assistant.md"
    llm:
      provider: "openai"  # openai | anthropic | google | ollama
      model: "gpt-4-turbo"
      api_key_env: "OPENAI_API_KEY"
      temperature: 0.7
      max_tokens: 4096
```
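
Note that `api_key_env` names the environment variable that holds the key, so the secret itself never appears in the config file. A minimal sketch of that indirection — the `resolveApiKey` helper is hypothetical, not part of the SDK:

```typescript
// Hypothetical helper showing the api_key_env indirection:
// the config stores the *name* of the variable, not the secret.
function resolveApiKey(apiKeyEnv: string): string {
  const key = process.env[apiKeyEnv];
  if (!key) {
    throw new Error(`Missing API key: set the ${apiKeyEnv} environment variable`);
  }
  return key;
}

const openaiKey = resolveApiKey('OPENAI_API_KEY');
```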

### OpenAI

```yaml
# sekuire.yml
llm:
  provider: "openai"
  model: "gpt-4-turbo"
  api_key_env: "OPENAI_API_KEY"
  organization_env: "OPENAI_ORG_ID"  # optional
  temperature: 0.7
  max_tokens: 4096
  top_p: 1.0
```

### Anthropic

```yaml
# sekuire.yml
llm:
  provider: "anthropic"
  model: "claude-3-opus-20240229"
  api_key_env: "ANTHROPIC_API_KEY"
  temperature: 0.7
  max_tokens: 4096
```

### Google

```yaml
# sekuire.yml
llm:
  provider: "google"
  model: "gemini-1.5-pro"
  api_key_env: "GOOGLE_API_KEY"
  temperature: 0.7
  max_tokens: 8192
```

### Ollama (Local)

```yaml
# sekuire.yml
llm:
  provider: "ollama"
  model: "llama3:70b"
  base_url: "http://localhost:11434"  # default
  temperature: 0.7
```
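
Since Ollama runs locally, checking that the server at `base_url` is reachable before starting the agent can save debugging time. A sketch using Ollama's `/api/version` endpoint and the built-in `fetch` available in Node 18+ — this check is illustrative, not an SDK feature:

```typescript
// Sketch: verify a local Ollama server is reachable before starting the agent.
async function checkOllama(baseUrl = 'http://localhost:11434'): Promise<void> {
  try {
    const res = await fetch(`${baseUrl}/api/version`);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const { version } = await res.json();
    console.log(`Ollama ${version} is running at ${baseUrl}`);
  } catch (err) {
    throw new Error(`Cannot reach Ollama at ${baseUrl}: ${err}`);
  }
}

await checkOllama();
```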

## Programmatic Configuration

```typescript
import { getAgent, OpenAIProvider, AnthropicProvider } from '@sekuire/sdk';

// Override the configured provider at runtime
const agent = await getAgent('assistant', {
  llm: new OpenAIProvider({
    model: 'gpt-4-turbo',
    apiKey: process.env.OPENAI_API_KEY,
    temperature: 0.5,
  }),
});

// Or switch providers dynamically
const claudeAgent = await getAgent('assistant', {
  llm: new AnthropicProvider({
    model: 'claude-3-opus-20240229',
    apiKey: process.env.ANTHROPIC_API_KEY,
  }),
});
```
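
Because every provider plugs into the same `llm` option, the choice can itself be driven by configuration. A sketch that selects a provider from an environment variable — the `LLM_PROVIDER` variable name is just an example, not an SDK convention:

```typescript
import { getAgent, OpenAIProvider, AnthropicProvider } from '@sekuire/sdk';

// Sketch: pick the provider from an environment variable.
// LLM_PROVIDER is an example name, not an SDK convention.
const llm =
  process.env.LLM_PROVIDER === 'anthropic'
    ? new AnthropicProvider({
        model: 'claude-3-opus-20240229',
        apiKey: process.env.ANTHROPIC_API_KEY,
      })
    : new OpenAIProvider({
        model: 'gpt-4-turbo',
        apiKey: process.env.OPENAI_API_KEY,
      });

const agent = await getAgent('assistant', { llm });
```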

## Next Steps