# Config Schema Reference

Complete schema for the `sekuire.yml` configuration file.

## Full Schema

**sekuire.yml**

```yaml
# Project metadata
project:
  name: string              # Required: Project name
  version: string           # Required: Semantic version (e.g., "1.0.0")
  description: string       # Optional: Project description

# Agent definitions
agents:
  <agent_key>:              # Agent identifier (used with getAgent())
    name: string            # Required: Display name
    system_prompt: string   # Required: Path to system prompt file
    tools: string | array   # Optional: Path to tools.json or inline list
    llm:
      provider: string      # Required: openai, anthropic, google, ollama, custom
      model: string         # Required: Model name (e.g., gpt-4-turbo)
      api_key_env: string   # Required: Env var for API key
      temperature: number   # Optional: 0-2 (default: 0.7)
      max_tokens: number    # Optional: Max response tokens
      base_url: string      # Optional: Custom API base URL
    memory:
      type: string          # Optional: buffer, summary, window, redis, postgres, file (default: buffer)
      max_messages: number  # Optional: Max messages to retain
      config: object        # Optional: Backend-specific config (redis/postgres/file)

# Deployment config
deployment:
  docker_image: string      # Optional: Container image reference
  runtime: string           # Optional: node, python, rust
  port: number              # Optional: Container port (default: 8000)

# Marketplace metadata (for Agentpreneurs)
marketplace:
  category: string          # Optional: analytics, productivity, etc.
  tags: array               # Optional: Search tags
  pricing:
    model: string           # Optional: subscription, usage, freemium
    price_usd: number       # Optional: Price in USD
    billing: string         # Optional: monthly, yearly, per-request
  compliance: array         # Optional: soc2, hipaa, gdpr
```
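The `marketplace` block has no field table below, so here is an illustrative filled-in version. The category, pricing model, billing, and compliance values come from the schema comments above; the specific tags and price are made up:

```yaml
marketplace:
  category: "analytics"
  tags:
    - "research"        # illustrative tag
    - "reporting"       # illustrative tag
  pricing:
    model: "subscription"
    price_usd: 29       # hypothetical price
    billing: "monthly"
  compliance:
    - soc2
    - gdpr
```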
## `project`

| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Project name (used in registry) |
| version | string | Yes | Semantic version |
| description | string | No | Project description |
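A minimal `project` block, using the same values as the complete example at the end of this page:

```yaml
project:
  name: "research-assistant"
  version: "1.0.0"
  description: "AI research assistant with web search"
```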
## `agents.<key>`

Each agent is defined under a unique key (e.g., `assistant`, `researcher`).
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Display name |
| system_prompt | string | Yes | Path to `.md` file |
| tools | string \| array | No | Path to a tools.json file, or an inline list |
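A sketch of a single agent entry showing both forms of `tools`; the agent key, tool name, and file paths here are illustrative:

```yaml
agents:
  assistant:
    name: "Assistant"
    system_prompt: "./prompts/assistant.md"
    tools:                    # inline list form...
      - web_search
    # tools: "./tools.json"   # ...or a path to a tools file
    llm:
      provider: "openai"
      model: "gpt-4o-mini"
      api_key_env: "OPENAI_API_KEY"
```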
### `llm`
| Field | Type | Required | Description |
|---|---|---|---|
| provider | string | Yes | openai, anthropic, google, ollama, custom |
| model | string | Yes | Model name |
| api_key_env | string | Yes | Env var name for API key |
| temperature | number | No | Sampling temperature (0-2, default: 0.7) |
| max_tokens | number | No | Max response tokens |
| base_url | string | No | Custom API base URL |
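For example, a conservative `llm` block for an Anthropic model; the temperature and token limit are illustrative, not recommendations:

```yaml
llm:
  provider: "anthropic"
  model: "claude-3-haiku"
  api_key_env: "ANTHROPIC_API_KEY"
  temperature: 0.2    # lower values give more deterministic output
  max_tokens: 1024
```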
### `memory`
| Field | Type | Required | Description |
|---|---|---|---|
| type | string | No | buffer, summary, window, redis, postgres, file |
| max_messages | number | No | Max messages to retain (for buffer/summary/window) |
| config | object | No | Backend-specific config (for redis/postgres/file) |
#### Memory Types
| Type | Description |
|---|---|
| buffer | FIFO queue - oldest messages dropped first (default) |
| summary | Summarizes older messages to save context |
| window | Sliding window of recent messages |
| redis | Redis-backed persistent storage |
| postgres | PostgreSQL-backed persistent storage |
| file | File-based local storage |
#### Memory Examples
**In-Memory (default)**

```yaml
memory:
  type: "buffer"
  max_messages: 10
```

**Summary-based**

```yaml
memory:
  type: "summary"
  max_messages: 20
```

**Redis-backed**

```yaml
memory:
  type: "redis"
  config:
    url: "redis://localhost:6379"  # Override via REDIS_URL
```

**PostgreSQL-backed**

```yaml
memory:
  type: "postgres"
  config:
    url: "postgres://user:pass@localhost/dbname"  # Override via DATABASE_URL
```

**File-based**

```yaml
memory:
  type: "file"
  config:
    path: "./agent_memory.json"
```
## `deployment`

| Field | Type | Required | Description |
|---|---|---|---|
| docker_image | string | No | Container image reference |
| runtime | string | No | node, python, rust |
| port | number | No | Container port (default: 8000) |
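An illustrative `deployment` block; the image name is hypothetical, and `port` overrides the default of 8000:

```yaml
deployment:
  docker_image: "myorg/my-agent:1.0.0"  # hypothetical image reference
  runtime: "python"
  port: 8080
```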
## Supported LLM Providers
| Provider | Models | API Key Env |
|---|---|---|
| openai | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini | OPENAI_API_KEY |
| anthropic | claude-3-opus, claude-3-sonnet, claude-3-haiku | ANTHROPIC_API_KEY |
| google | gemini-pro, gemini-1.5-pro, gemini-2.0-flash | GOOGLE_API_KEY |
| ollama | llama3, mistral, mixtral, deepseek (local) | N/A |
| custom | Any OpenAI-compatible API | Custom env var |
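As an example, the google row above translates to an `llm` block like this (the model choice is illustrative):

```yaml
llm:
  provider: "google"
  model: "gemini-1.5-pro"
  api_key_env: "GOOGLE_API_KEY"
```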
## Custom LLM Providers

You can use any OpenAI-compatible API by setting `provider: "custom"` and specifying a `base_url`:
**OpenRouter**

```yaml
llm:
  provider: "custom"
  model: "anthropic/claude-3.5-sonnet"
  api_key_env: "OPENROUTER_API_KEY"
  base_url: "https://openrouter.ai/api/v1"
```

**Azure OpenAI**

```yaml
llm:
  provider: "custom"
  model: "gpt-4"
  api_key_env: "AZURE_OPENAI_API_KEY"
  base_url: "https://your-resource.openai.azure.com/openai/deployments/gpt-4"
```

**Together AI**

```yaml
llm:
  provider: "custom"
  model: "meta-llama/Llama-3-70b-chat-hf"
  api_key_env: "TOGETHER_API_KEY"
  base_url: "https://api.together.xyz/v1"
```

**Local vLLM / Text Generation Inference**

```yaml
llm:
  provider: "custom"
  model: "mistral-7b"
  api_key_env: "LOCAL_API_KEY"  # or leave empty
  base_url: "http://localhost:8080/v1"
```
## Complete Example

**sekuire.yml**

```yaml
project:
  name: "research-assistant"
  version: "1.0.0"
  description: "AI research assistant with web search"

agents:
  researcher:
    name: "Research Agent"
    system_prompt: "./prompts/researcher.md"
    tools:
      - web_search
      - calculator
    llm:
      provider: "anthropic"
      model: "claude-3-opus-20240229"
      api_key_env: "ANTHROPIC_API_KEY"
      temperature: 0.3
      max_tokens: 4000
    memory:
      type: "buffer"
      max_messages: 20

  coder:
    name: "Coding Agent"
    system_prompt: "./prompts/coder.md"
    tools: "./coder-tools.json"
    llm:
      provider: "openai"
      model: "gpt-4-turbo"
      api_key_env: "OPENAI_API_KEY"
      temperature: 0.1

deployment:
  docker_image: "myrepo/research-assistant:v1.0.0"
  runtime: "node"
  port: 8000
```

> 💡 Use `sekuire init` to generate a starter config with sensible defaults.