# LLM Providers
Unified configuration for multiple LLM providers. Switch between OpenAI, Anthropic, Google, and Ollama without changing your code.
## Supported Providers
| Provider | Models | Streaming |
|---|---|---|
| OpenAI | GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, o1 | Yes |
| Anthropic | Claude 3 Opus, Sonnet, Haiku | Yes |
| Google | Gemini Pro, Gemini 1.5 Pro/Flash | Yes |
| Ollama | Llama 3, Mistral, Mixtral, custom | Yes |
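Because the provider is selected in configuration, the calling code is identical for every row above. A minimal sketch (assuming `getAgent`, shown under Programmatic Configuration below, also works without its optional overrides argument):

```typescript
import { getAgent } from '@sekuire/sdk';

// Provider, model, and credentials come from sekuire.yml, so this line is
// unchanged whether 'assistant' is backed by OpenAI, Anthropic, Google, or Ollama.
const agent = await getAgent('assistant');
```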
## Configuration

Configure providers in `sekuire.yml`:
```yaml
# sekuire.yml
agents:
  assistant:
    name: "AI Assistant"
    system_prompt: "./prompts/assistant.md"
    llm:
      provider: "openai" # openai | anthropic | google | ollama
      model: "gpt-4-turbo"
      api_key_env: "OPENAI_API_KEY"
      temperature: 0.7
      max_tokens: 4096
```
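The `api_key_env` fields name environment variables rather than embedding secrets in the file. A fail-fast check at startup can surface a missing key before the first request; a minimal sketch (the variable name comes from your own config):

```typescript
// Verify the variable named by api_key_env is set before creating agents.
// 'OPENAI_API_KEY' matches the config above; adjust for your provider.
const keyVar = 'OPENAI_API_KEY';
if (!process.env[keyVar]) {
  throw new Error(`Missing ${keyVar}; export it before starting the agent.`);
}
```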
### OpenAI

```yaml
# sekuire.yml
llm:
  provider: "openai"
  model: "gpt-4-turbo"
  api_key_env: "OPENAI_API_KEY"
  organization_env: "OPENAI_ORG_ID" # optional
  temperature: 0.7
  max_tokens: 4096
  top_p: 1.0
```
### Anthropic

```yaml
# sekuire.yml
llm:
  provider: "anthropic"
  model: "claude-3-opus-20240229"
  api_key_env: "ANTHROPIC_API_KEY"
  temperature: 0.7
  max_tokens: 4096
```
### Google

```yaml
# sekuire.yml
llm:
  provider: "google"
  model: "gemini-1.5-pro"
  api_key_env: "GOOGLE_API_KEY"
  temperature: 0.7
  max_tokens: 8192
```
### Ollama (Local)

```yaml
# sekuire.yml
llm:
  provider: "ollama"
  model: "llama3:70b"
  base_url: "http://localhost:11434" # default
  temperature: 0.7
```
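Ollama runs locally, so no API key is involved; the SDK talks to the server at `base_url`. Before starting, it can help to confirm the server is reachable and the model is pulled. A sketch against Ollama's `/api/tags` endpoint, which lists locally installed models (the `llama3` check mirrors the config above):

```typescript
// Ask the local Ollama server which models are installed.
const res = await fetch('http://localhost:11434/api/tags');
const { models } = (await res.json()) as { models: { name: string }[] };
if (!models.some((m) => m.name.startsWith('llama3'))) {
  console.warn('llama3 not installed; run `ollama pull llama3:70b` first.');
}
```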
## Programmatic Configuration

```typescript
import { getAgent, OpenAIProvider, AnthropicProvider } from '@sekuire/sdk';

// Override the configured provider at runtime
const agent = await getAgent('assistant', {
  llm: new OpenAIProvider({
    model: 'gpt-4-turbo',
    apiKey: process.env.OPENAI_API_KEY,
    temperature: 0.5,
  }),
});

// Or switch providers dynamically
const claudeAgent = await getAgent('assistant', {
  llm: new AnthropicProvider({
    model: 'claude-3-opus-20240229',
    apiKey: process.env.ANTHROPIC_API_KEY,
  }),
});
```
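Because both calls accept any provider object, runtime selection reduces to constructing the right one. A sketch that picks the provider from an environment flag (`LLM_PROVIDER` is a convention assumed here, not an SDK feature):

```typescript
import { getAgent, OpenAIProvider, AnthropicProvider } from '@sekuire/sdk';

// LLM_PROVIDER is our own flag, read once at startup.
const llm = process.env.LLM_PROVIDER === 'anthropic'
  ? new AnthropicProvider({
      model: 'claude-3-opus-20240229',
      apiKey: process.env.ANTHROPIC_API_KEY,
    })
  : new OpenAIProvider({
      model: 'gpt-4-turbo',
      apiKey: process.env.OPENAI_API_KEY,
    });

const agent = await getAgent('assistant', { llm });
```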
## Next Steps

- Agent API - Using agents with providers
- Streaming - Stream responses from providers
- Config Schema - Full configuration reference