Framework Integrations
Sekuire works as a governance layer for existing AI frameworks, not a replacement. You keep your framework of choice - Vercel AI SDK, LangChain, CrewAI, OpenAI Agents - and Sekuire enforces policies on every LLM call, tool invocation, and network request.
Supported Integrations
| Framework | Language | Integration Pattern | Guide |
|---|---|---|---|
| Vercel AI SDK | TypeScript | Custom LanguageModelV1 provider | View guide |
| LangChain | TypeScript, Python | Custom BaseChatModel subclass | View guide |
| CrewAI | Python | Custom LLM class passed to agents | View guide |
| OpenAI Agents SDK | TypeScript | Proxy-based OpenAI client | View guide |
How It Works
Sekuire intercepts at the LLM provider level. Your framework calls what it thinks is a normal LLM, but the request passes through Sekuire's policy enforcement before reaching the actual API.
┌──────────────────┐ ┌───────────────────────────┐ ┌──────────┐
│ Your Framework │────>│ Sekuire Provider Layer │────>│ LLM API │
│ (Vercel AI SDK, │ │ │ │ (OpenAI, │
│ LangChain, │ │ - Model allowlist check │ │ Anthropic│
│ CrewAI, etc.) │ │ - Tool blocking │ │ etc.) │
│ │<────│ - Rate limit enforcement │<────│ │
└──────────────────┘ │ - Network restrictions │ └──────────┘
│ - Filesystem restrictions │
└───────────────────────────┘
The framework itself has no idea governance is happening. Every generateText(), invoke(), crew.kickoff(), or run() call hits the policy layer transparently.
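The interception pattern above can be sketched as a thin provider wrapper: the framework calls what looks like an ordinary chat function, and the wrapper runs a policy check before forwarding to the real API. The snippet below is an illustrative, self-contained sketch — `governedProvider`, the `ChatFn` shape, and the error constructor are assumptions for demonstration, not the actual Sekuire API:

```typescript
// Illustrative sketch of provider-level interception (not the real Sekuire internals).
class PolicyViolationError extends Error {}

type ChatFn = (model: string, prompt: string) => Promise<string>;

// Wraps a real provider call so every request passes a policy check first.
// The framework only ever sees the wrapped function, so enforcement is transparent.
function governedProvider(realChat: ChatFn, allowedModels: Set<string>): ChatFn {
  return async (model, prompt) => {
    if (!allowedModels.has(model)) {
      throw new PolicyViolationError(`Model "${model}" is not on the allowlist`);
    }
    return realChat(model, prompt); // request proceeds to the real LLM API
  };
}
```

Because the wrapper preserves the provider's call signature, the framework needs no changes — it is handed the governed function instead of the raw client.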
What Gets Enforced
| Governance Feature | Description |
|---|---|
| Model allowlists | Only approved models can be used. Requests to blocked models throw PolicyViolationError. |
| Tool blocking | Blocked tools are filtered at creation time and re-checked at execution time (defense in depth). |
| Rate limits | Per-agent request and token limits per minute. Exceeding the limit throws immediately. |
| Network restrictions | Allowed/blocked domain lists with optional TLS enforcement. |
| Filesystem restrictions | Allowed/blocked paths and file extensions. |
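As a concrete illustration of the rate-limit rule, a sliding-window limiter that throws as soon as the per-minute budget is spent might look like the following. This is a simplified sketch of the behavior described in the table — the real enforcement lives inside Sekuire's provider layer and is driven by configuration, not user code:

```typescript
// Simplified sketch of per-minute request limiting (not the actual Sekuire internals).
class PolicyViolationError extends Error {}

class RateLimiter {
  private timestamps: number[] = [];
  constructor(private maxRequestsPerMinute: number) {}

  // Throws immediately when the budget for the trailing 60-second window is spent.
  check(now: number = Date.now()): void {
    const windowStart = now - 60_000;
    this.timestamps = this.timestamps.filter((t) => t > windowStart);
    if (this.timestamps.length >= this.maxRequestsPerMinute) {
      throw new PolicyViolationError("Rate limit exceeded: requests per minute");
    }
    this.timestamps.push(now);
  }
}
```

The "throws immediately" behavior matters for agents: a rejected request surfaces as an error the framework can handle, rather than silently queuing work.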
Prerequisites
All integrations require:
- Sekuire SDK installed - `@sekuire/sdk` (TypeScript) or `sekuire-sdk` (Python)
- `sekuire.yml` configured - Defines the agent, model, policy rules, and permissions
- LLM API key - Set as an environment variable (e.g., `OPENAI_API_KEY`)
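For orientation, a `sekuire.yml` covering the enforcement features above might look roughly like the following. This is a hypothetical sketch showing the kinds of fields involved (agent identity, model allowlist, tool blocks, rate limits, network and filesystem rules); the exact keys and structure are defined in the Configuration reference, not here:

```yaml
# Hypothetical sketch - consult the Configuration reference for the real schema.
agent:
  name: support-bot
policy:
  models:
    allow: [gpt-4o]
  tools:
    block: [shell_exec]
  rate_limits:
    requests_per_minute: 60
    tokens_per_minute: 100000
  network:
    allow_domains: [api.openai.com]
  filesystem:
    allow_paths: [/app/data]
```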
Choosing an Integration
- Next.js / React Server Components - Use the Vercel AI SDK integration
- LangChain agents or chains - Use the LangChain integration (available in both TypeScript and Python)
- Multi-agent crews - Use the CrewAI integration
- OpenAI Agents SDK - Use the OpenAI Agents integration
- Custom framework - Use the Sekuire SDK directly with `createLLMProvider()` and `PolicyEnforcer`
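For the custom-framework path, the flow is: build an enforcer from the policy, then gate every model and tool call through it. The snippet below is a self-contained toy analog of that flow — the real `createLLMProvider()` and `PolicyEnforcer` signatures are documented in the Policy Enforcement guide, so every name here (`ToyEnforcer`, the `Policy` shape) is an illustrative stand-in:

```typescript
// Toy stand-in for a PolicyEnforcer-style check flow (not the real Sekuire API).
class PolicyViolationError extends Error {}

interface Policy {
  allowedModels: string[];
  blockedTools: string[];
}

class ToyEnforcer {
  constructor(private policy: Policy) {}

  assertModelAllowed(model: string): void {
    if (!this.policy.allowedModels.includes(model)) {
      throw new PolicyViolationError(`Blocked model: ${model}`);
    }
  }

  // Defense in depth: filter tools at creation time...
  filterTools<T extends { name: string }>(tools: T[]): T[] {
    return tools.filter((t) => !this.policy.blockedTools.includes(t.name));
  }

  // ...and re-check each tool again at execution time.
  assertToolAllowed(name: string): void {
    if (this.policy.blockedTools.includes(name)) {
      throw new PolicyViolationError(`Blocked tool: ${name}`);
    }
  }
}
```

The two-step tool check mirrors the defense-in-depth behavior described above: even if a blocked tool slips past creation-time filtering, the execution-time check still rejects it.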
Next Steps
- Policy Enforcement - Deep dive into `PolicyEnforcer` and policy rules
- Built-in Tools - Tool registry and built-in tool classes
- Configuration - Full `sekuire.yml` reference