# Config Schema Reference

Complete schema for the `sekuire.yml` configuration file.

## Full Schema

**sekuire.yml**

```yaml
# Project metadata
project:
  name: string                # Required: Project name
  version: string             # Required: Semantic version (e.g., "1.0.0")
  description: string         # Optional: Project description

# Agent definitions
agents:
  <agent_key>:                # Agent identifier (used with getAgent())
    name: string              # Required: Display name
    system_prompt: string     # Required: Path to system prompt file
    tools: string | array     # Optional: Path to tools.json or inline list
    llm:
      provider: string        # Required: openai, anthropic, google, ollama
      model: string           # Required: Model name (e.g., gpt-4-turbo)
      api_key_env: string     # Required: Env var for API key
      temperature: number     # Optional: 0-2 (default: 0.7)
      max_tokens: number      # Optional: Max response tokens
      base_url: string        # Optional: Custom API base URL
    memory:
      type: string            # Optional: buffer, summary, window, redis, postgres, file (default: buffer)
      max_messages: number    # Optional: Max messages to retain
      config: object          # Optional: Backend-specific config (redis/postgres/file)

# Deployment config
deployment:
  docker_image: string        # Optional: Container image reference
  runtime: string             # Optional: node, python, rust
  port: number                # Optional: Container port (default: 8000)

# Marketplace metadata (for Agentpreneurs)
marketplace:
  category: string            # Optional: analytics, productivity, etc.
  tags: array                 # Optional: Search tags
  pricing:
    model: string             # Optional: subscription, usage, freemium
    price_usd: number         # Optional: Price in USD
    billing: string           # Optional: monthly, yearly, per-request
  compliance: array           # Optional: soc2, hipaa, gdpr
```
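
The `marketplace` block has no dedicated reference table below, so here is a sketch assembled from the schema comments above. The category, tag, and pricing values are illustrative placeholders, not an exhaustive list of accepted values:

```yaml
# Sketch of a marketplace block, based on the schema comments above.
# Values are placeholders; check the marketplace docs for accepted values.
marketplace:
  category: "analytics"
  tags:
    - "research"      # placeholder tag
    - "automation"    # placeholder tag
  pricing:
    model: "subscription"
    price_usd: 29
    billing: "monthly"
  compliance:
    - "soc2"
    - "gdpr"
```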

## `project`

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | string | Yes | Project name (used in registry) |
| `version` | string | Yes | Semantic version |
| `description` | string | No | Project description |
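
As a minimal sketch, a `project` block needs only the two required fields (the name here is a placeholder):

```yaml
project:
  name: "my-agent"    # placeholder project name
  version: "0.1.0"
```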

## `agents.<key>`

Each agent is defined under a unique key (e.g., `assistant`, `researcher`). Both forms of the `tools` field are sketched after the table below.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | string | Yes | Display name |
| `system_prompt` | string | Yes | Path to `.md` file |
| `tools` | string \| array | No | Tools config |
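
Since `tools` accepts either a file path or an inline list, here is a sketch of both forms; the agent keys, paths, and tool names are placeholders:

```yaml
agents:
  assistant:
    name: "Assistant"
    system_prompt: "./prompts/assistant.md"
    tools: "./tools.json"      # string form: path to a tools file

  researcher:
    name: "Researcher"
    system_prompt: "./prompts/researcher.md"
    tools:                     # array form: inline tool list
      - web_search
      - calculator
```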

### `llm`

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `provider` | string | Yes | `openai`, `anthropic`, `google`, `ollama` |
| `model` | string | Yes | Model name |
| `api_key_env` | string | Yes | Env var name for API key |
| `temperature` | number | No | Sampling temperature (0-2) |
| `max_tokens` | number | No | Max response tokens |
| `base_url` | string | No | Custom API base URL |
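
For example, a sketch of an `llm` block for the Google provider; the model name comes from the provider table below, and the tuning values are illustrative:

```yaml
llm:
  provider: "google"
  model: "gemini-1.5-pro"
  api_key_env: "GOOGLE_API_KEY"
  temperature: 0.5    # illustrative; defaults to 0.7 when omitted
  max_tokens: 2048    # illustrative
```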

### `memory`

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `type` | string | No | `buffer`, `summary`, `window`, `redis`, `postgres`, `file` |
| `max_messages` | number | No | Max messages to retain (for buffer/summary/window) |
| `config` | object | No | Backend-specific config (for redis/postgres/file) |

#### Memory Types

| Type | Description |
|------|-------------|
| `buffer` | FIFO queue - oldest messages dropped first (default) |
| `summary` | Summarize older messages to save context |
| `window` | Sliding window of recent messages |
| `redis` | Redis-backed persistent storage |
| `postgres` | PostgreSQL-backed persistent storage |
| `file` | File-based local storage |

#### Memory Examples

**In-Memory (default)**

```yaml
memory:
  type: "buffer"
  max_messages: 10
```

**Summary-based**

```yaml
memory:
  type: "summary"
  max_messages: 20
```

**Redis-backed**

```yaml
memory:
  type: "redis"
  config:
    url: "redis://localhost:6379"  # Override via REDIS_URL
```

**PostgreSQL-backed**

```yaml
memory:
  type: "postgres"
  config:
    url: "postgres://user:pass@localhost/dbname"  # Override via DATABASE_URL
```

**File-based**

```yaml
memory:
  type: "file"
  config:
    path: "./agent_memory.json"
```

## `deployment`

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `docker_image` | string | No | Container image reference |
| `runtime` | string | No | `node`, `python`, `rust` |
| `port` | number | No | Container port (default: 8000) |
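
A sketch of a `deployment` block; the image name is a placeholder, and `port` can be omitted to use the default:

```yaml
deployment:
  docker_image: "myrepo/my-agent:latest"   # placeholder image reference
  runtime: "python"
  port: 8080        # omit to use the default (8000)
```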

## Supported LLM Providers

| Provider | Models | API Key Env |
|----------|--------|-------------|
| `openai` | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini | `OPENAI_API_KEY` |
| `anthropic` | claude-3-opus, claude-3-sonnet, claude-3-haiku | `ANTHROPIC_API_KEY` |
| `google` | gemini-pro, gemini-1.5-pro, gemini-2.0-flash | `GOOGLE_API_KEY` |
| `ollama` | llama3, mistral, mixtral, deepseek (local) | N/A |
| `custom` | Any OpenAI-compatible API | Custom env var |
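
For local Ollama models the table lists the API key as N/A. The sketch below assumes `api_key_env` can be omitted for local providers and that `base_url` should point at Ollama's default local endpoint; verify both against your runtime:

```yaml
llm:
  provider: "ollama"
  model: "llama3"
  # api_key_env omitted: the table above lists it as N/A for ollama.
  # base_url is an assumption; Ollama's default local endpoint is shown.
  base_url: "http://localhost:11434"
```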

## Custom LLM Providers

You can use any OpenAI-compatible API by setting `provider: "custom"` and specifying a `base_url`:

**OpenRouter**

```yaml
llm:
  provider: "custom"
  model: "anthropic/claude-3.5-sonnet"
  api_key_env: "OPENROUTER_API_KEY"
  base_url: "https://openrouter.ai/api/v1"
```

**Azure OpenAI**

```yaml
llm:
  provider: "custom"
  model: "gpt-4"
  api_key_env: "AZURE_OPENAI_API_KEY"
  base_url: "https://your-resource.openai.azure.com/openai/deployments/gpt-4"
```

**Together AI**

```yaml
llm:
  provider: "custom"
  model: "meta-llama/Llama-3-70b-chat-hf"
  api_key_env: "TOGETHER_API_KEY"
  base_url: "https://api.together.xyz/v1"
```

**Local vLLM / Text Generation Inference**

```yaml
llm:
  provider: "custom"
  model: "mistral-7b"
  api_key_env: "LOCAL_API_KEY"  # or leave empty
  base_url: "http://localhost:8080/v1"
```

## Complete Example

**sekuire.yml**

```yaml
project:
  name: "research-assistant"
  version: "1.0.0"
  description: "AI research assistant with web search"

agents:
  researcher:
    name: "Research Agent"
    system_prompt: "./prompts/researcher.md"
    tools:
      - web_search
      - calculator
    llm:
      provider: "anthropic"
      model: "claude-3-opus-20240229"
      api_key_env: "ANTHROPIC_API_KEY"
      temperature: 0.3
      max_tokens: 4000
    memory:
      type: "buffer"
      max_messages: 20

  coder:
    name: "Coding Agent"
    system_prompt: "./prompts/coder.md"
    tools: "./coder-tools.json"
    llm:
      provider: "openai"
      model: "gpt-4-turbo"
      api_key_env: "OPENAI_API_KEY"
      temperature: 0.1

deployment:
  docker_image: "myrepo/research-assistant:v1.0.0"
  runtime: "node"
  port: 8000
```
> 💡 Use `sekuire init` to generate a starter config with sensible defaults.