# Sekuire SDKs
Build AI agents with the Sekuire Secure Layer using our official SDKs for TypeScript, Python, and Rust.
> **Tip:** All SDKs use the same `sekuire.yml` configuration file format. Use the CLI to scaffold projects.
## Installation
**TypeScript**

```bash
npm install @sekuire/sdk
# or
pnpm add @sekuire/sdk
```

**Python**

```bash
pip install sekuire-sdk
```

**Rust**

```toml
# Add to Cargo.toml
[dependencies]
sekuire-sdk = "0.1"
tokio = { version = "1", features = ["full"] }
```
## Quick Start
Create an agent with the CLI, then use the SDK to interact with it:
**TypeScript**

```typescript
import { getAgent } from '@sekuire/sdk';

// Load agent from sekuire.yml
const agent = await getAgent('assistant');

// Chat with the agent
const response = await agent.chat('Hello!');
console.log(response);
```

**Python**

```python
import asyncio

from sekuire_sdk import get_agent

async def main():
    # Load agent from sekuire.yml
    agent = await get_agent('assistant')

    # Chat with the agent
    response = await agent.chat('Hello!')
    print(response)

asyncio.run(main())
```

**Rust**

```rust
use sekuire_sdk::get_agent;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Load agent from sekuire.yml
    let mut agent = get_agent(Some("assistant"), None).await?;

    // Chat with the agent
    let response = agent.chat("Hello!", None).await?;
    println!("{}", response);

    Ok(())
}
```
## Features
| Feature | TypeScript | Python | Rust |
|---|---|---|---|
| LLM Providers | OpenAI, Anthropic, Google, Ollama | OpenAI, Anthropic, Google, Ollama | OpenAI, Anthropic, Google, Ollama |
| Config-First | Yes | Yes | Yes |
| Streaming | Yes | Yes | Yes |
| Built-in Tools | Yes | Yes | Yes |
| Memory | Buffer, Window | Buffer, Window | Buffer, Window |
| Type Safety | TypeScript | Type Hints | Rust types |
## Core Concepts
### Config-First Approach
All SDKs read from `sekuire.yml`, the same config file the CLI uses, so your development workflow and runtime stay consistent.
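As a rough illustration, a minimal `sekuire.yml` might look like the following. The field names here are assumptions for the sake of the example, not the canonical schema; consult the CLI-scaffolded file for the real structure:

```yaml
# Hypothetical sekuire.yml - field names are illustrative only
agents:
  assistant:
    provider: openai      # which LLM backend to use
    model: gpt-4
    memory: buffer        # conversation memory strategy
    tools:
      - calculator
      - http
```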
### LLM Providers
Switch providers by changing the config. All SDKs support:

- **OpenAI** - GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- **Anthropic** - Claude 3 Opus, Sonnet, Haiku
- **Google** - Gemini Pro, Gemini 1.5
- **Ollama** - Local models (Llama, Mistral, etc.)
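For example, moving from OpenAI to Anthropic could be a config-only change with no code edits. Field names and model identifiers below are illustrative, not the canonical schema:

```yaml
# Before (hypothetical field names)
provider: openai
model: gpt-4

# After - the SDK code calling agent.chat() stays the same
provider: anthropic
model: claude-3-opus
```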
### Async by Default
All SDKs are async-first for optimal performance with I/O-bound operations.
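Because every call returns a promise (or future), independent requests can overlap their I/O wait. A minimal sketch of the pattern, using a stand-in `chat` function rather than the real SDK:

```typescript
// Stand-in for agent.chat() - simulates an I/O-bound LLM call
async function chat(message: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 50));
  return `echo: ${message}`;
}

async function main() {
  // Sequential: each call waits for the previous one to finish
  const first = await chat('Hello!');

  // Concurrent: independent requests run at the same time,
  // so total latency is roughly one call, not the sum of all calls
  const [a, b] = await Promise.all([chat('status?'), chat('summary?')]);

  console.log(first, a, b);
}

main();
```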
## API Reference
### Core API
| Command | Description |
|---|---|
| `getAgent()` | Load a single agent from config |
| `getAgents()` | Load all agents from config |
| `agent.chat()` | Send a message and get a response |
| `agent.chatStream()` | Stream response tokens |
| Built-in Tools | Calculator, HTTP, Web Search |
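`agent.chatStream()` yields response tokens as they arrive; its exact return type is an assumption here. The consumption pattern can be sketched with a stand-in async generator in place of the SDK call:

```typescript
// Stand-in for agent.chatStream() - yields tokens as an async iterable
async function* chatStream(message: string): AsyncGenerator<string> {
  for (const token of ['Hello', ', ', 'world', '!']) {
    yield token; // the real SDK would yield model tokens as they arrive
  }
}

async function main(): Promise<string> {
  let full = '';
  // for await consumes tokens incrementally instead of
  // blocking until the complete response is ready
  for await (const token of chatStream('Hi')) {
    process.stdout.write(token);
    full += token;
  }
  return full;
}

main();
```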
### Platform Features
| Command | Description |
|---|---|
| `A2AClient` / `A2AServer` | Agent-to-agent communication |
| `createBeacon()` | Heartbeat and kill switch |
| `SekuireServer` | HTTP server with trust endpoints |
| `PolicyEnforcer` | Runtime policy enforcement |
| `createMemoryStorage()` | Pluggable memory backends |
| `initTelemetry()` | OpenTelemetry integration |
| `createWorker()` | Background task processing |
| LLM Providers | OpenAI, Anthropic, Google, Ollama |
## Advanced Features
| Feature | Description |
|---|---|
| Multi-Agent Orchestration | Discover and delegate across agents |
| Agent Identity | Cryptographic signing and verification |
| Logging | Compliance event logging |
## Reference
| Topic | Description |
|---|---|
| Tools Reference | All 50+ built-in tools |
| Environment Variables | Configuration reference |
| Authentication | Identity and trust |
| Error Handling | Error types and recovery |
## Next Steps
- **Quickstart Guide** - Full walkthrough
- **Agent API** - Creating and using agents
- **Built-in Tools** - Calculator, HTTP, File I/O
- **Tools Reference** - Complete tool documentation
- **Multi-Agent** - Agent orchestration
- **Streaming** - Real-time responses
- **CLI** - Scaffold projects