Sekuire SDKs
Build AI agents with the Sekuire Secure Layer using our official SDKs for TypeScript, Python, and Rust.
Installation
```bash
npm install @sekuire/sdk
# or
pnpm add @sekuire/sdk
```
Quick Start
Create an agent with the CLI, then use the SDK to interact with it:
```typescript
import { getAgent } from '@sekuire/sdk';

// Load agent from sekuire.yml
const agent = await getAgent('assistant');

// Chat with the agent
const response = await agent.chat('Hello!');
console.log(response);
```
Features
| Feature | TypeScript | Python | Rust |
|---|---|---|---|
| LLM Providers | OpenAI, Anthropic, Google, Ollama | OpenAI, Anthropic, Google, Ollama | OpenAI, Anthropic, Google, Ollama |
| Config-First | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ |
| Built-in Tools | ✅ | ✅ | ✅ |
| Memory | Buffer, Window | Buffer, Window | Buffer, Window |
| Type Safety | TypeScript | Type Hints | Rust types |
Core Concepts
Config-First Approach
All SDKs read from sekuire.yml — the same config file used by the CLI. This ensures consistency between your development workflow and runtime.
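For reference, a minimal sekuire.yml for the `assistant` agent used above might look like the sketch below. The field names (`provider`, `model`, `instructions`) are illustrative assumptions, not the canonical schema:
```yaml
# sekuire.yml — illustrative sketch; keys are assumptions, see the CLI docs for the real schema
agents:
  assistant:
    provider: openai            # switching providers is a one-line change here (assumed key)
    model: gpt-4
    instructions: You are a helpful assistant.
```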
LLM Providers
Switch providers by changing the config. All SDKs support:
- OpenAI — GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- Anthropic — Claude 3 Opus, Sonnet, Haiku
- Google — Gemini Pro, Gemini 1.5
- Ollama — Local models (Llama, Mistral, etc.)
Async by Default
All SDKs are async-first, so I/O-bound operations such as LLM calls and tool invocations don't block the rest of your program.
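Because every call returns a promise, independent requests can run concurrently. A minimal TypeScript sketch, assuming the `assistant` agent from the Quick Start plus a hypothetical `researcher` agent defined in sekuire.yml:
```typescript
import { getAgent } from '@sekuire/sdk';

// Load two agents in parallel ('researcher' is a hypothetical example agent).
const [assistant, researcher] = await Promise.all([
  getAgent('assistant'),
  getAgent('researcher'),
]);

// I/O-bound chat calls run concurrently instead of blocking each other.
const [summary, papers] = await Promise.all([
  assistant.chat('Summarize this repository.'),
  researcher.chat('Find recent papers on agent security.'),
]);

console.log(summary, papers);
```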
API Reference
Core API
| API | Description |
|---|---|
| getAgent() | Load a single agent from config |
| getAgents() | Load all agents from config |
| agent.chat() | Send a message and get a response |
| agent.chatStream() | Stream response tokens |
| Built-in Tools | Calculator, HTTP, Web Search |
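A short TypeScript sketch of the streaming call, under the assumption that `agent.chatStream()` returns an async iterable of token strings (the actual return type may differ):
```typescript
import { getAgent } from '@sekuire/sdk';

const agent = await getAgent('assistant');

// Assumption: chatStream() yields tokens as they arrive from the provider.
for await (const token of agent.chatStream('Tell me a short story.')) {
  process.stdout.write(token);
}
```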
Platform Features
| API | Description |
|---|---|
| A2AClient / A2AServer | Agent-to-agent communication |
| createBeacon() | Heartbeat and kill switch |
| SekuireServer | HTTP server with trust endpoints |
| PolicyEnforcer | Runtime policy enforcement |
| createMemoryStorage() | Pluggable memory backends |
| initTelemetry() | OpenTelemetry integration |
| createWorker() | Background task processing |
| LLM Providers | OpenAI, Anthropic, Google, Ollama |
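As a rough illustration of wiring up two of these platform features in TypeScript — every option name below is a hypothetical placeholder, not the documented signature:
```typescript
import { initTelemetry, createBeacon } from '@sekuire/sdk';

// Hypothetical options — consult the platform reference for the real signatures.
initTelemetry({ serviceName: 'my-agent' });

// Heartbeat plus kill switch; interval and handler names are illustrative only.
createBeacon({
  intervalMs: 30_000,
  onKill: () => process.exit(0),
});
```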
Next Steps
- Quickstart Guide — Full walkthrough
- Agent API — Creating and using agents
- Built-in Tools — Calculator, HTTP, File I/O
- Streaming — Real-time responses
- CLI — Scaffold projects