Framework Integrations

Sekuire works as a governance layer for existing AI frameworks, not a replacement. You keep your framework of choice - Vercel AI SDK, LangChain, CrewAI, OpenAI Agents - and Sekuire enforces policies on every LLM call, tool invocation, and network request.


Supported Integrations

Framework           Language             Integration Pattern                 Guide
Vercel AI SDK       TypeScript           Custom LanguageModelV1 provider     View guide
LangChain           TypeScript, Python   Custom BaseChatModel subclass       View guide
CrewAI              Python               Custom LLM class passed to agents   View guide
OpenAI Agents SDK   TypeScript           Proxy-based OpenAI client           View guide

How It Works

Sekuire intercepts at the LLM provider level. Your framework calls what looks like a normal LLM provider, but every request passes through Sekuire's policy enforcement before reaching the actual API.

┌──────────────────┐     ┌───────────────────────────┐     ┌──────────┐
│ Your Framework   │────>│ Sekuire Provider Layer    │────>│ LLM API  │
│ (Vercel AI SDK,  │     │                           │     │ (OpenAI, │
│  LangChain,      │     │ - Model allowlist check   │     │ Anthropic│
│  CrewAI, etc.)   │     │ - Tool blocking           │     │  etc.)   │
│                  │<────│ - Rate limit enforcement  │<────│          │
└──────────────────┘     │ - Network restrictions    │     └──────────┘
                         │ - Filesystem restrictions │
                         └───────────────────────────┘

The framework itself has no idea governance is happening. Every generateText(), invoke(), crew.kickoff(), or run() call hits the policy layer transparently.
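The interception pattern above can be sketched in plain TypeScript. Note that the names below (`Policy`, `PolicyViolationError` as defined here, and the fake `callModel`) are illustrative stand-ins for this sketch, not the actual `@sekuire/sdk` API:

```typescript
// Minimal sketch of provider-level interception: the framework receives
// a wrapped LLM call and cannot tell it apart from an ungoverned provider.
// All names here are illustrative, not the real @sekuire/sdk API.

class PolicyViolationError extends Error {}

interface Policy {
  allowedModels: string[];
}

type LLMCall = (model: string, prompt: string) => string;

// Wrap a raw LLM call so every request passes the policy check first.
function withGovernance(call: LLMCall, policy: Policy): LLMCall {
  return (model, prompt) => {
    if (!policy.allowedModels.includes(model)) {
      // Blocked before any network traffic happens.
      throw new PolicyViolationError(`Model not allowed: ${model}`);
    }
    return call(model, prompt);
  };
}

// Fake upstream provider for demonstration.
const callModel: LLMCall = (model, prompt) => `[${model}] echo: ${prompt}`;

const governed = withGovernance(callModel, { allowedModels: ["gpt-4o"] });

console.log(governed("gpt-4o", "hello")); // allowed: reaches the "API"
try {
  governed("some-blocked-model", "hello"); // rejected by the policy layer
} catch (e) {
  console.log((e as Error).message);
}
```

The real provider layer does the same thing one level lower, behind the framework's own provider interface, which is why no framework code has to change.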


What Gets Enforced

Governance Feature        Description
Model allowlists          Only approved models can be used. Requests to blocked models throw PolicyViolationError.
Tool blocking             Blocked tools are filtered at creation time and re-checked at execution time (defense in depth).
Rate limits               Per-agent request and token limits per minute. Exceeding a limit throws immediately.
Network restrictions      Allowed/blocked domain lists with optional TLS enforcement.
Filesystem restrictions   Allowed/blocked paths and file extensions.

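The defense-in-depth approach to tool blocking can be sketched as follows. The shapes here (`Tool`, `filterTools`) are invented for illustration and are not the SDK's real types:

```typescript
// Sketch of tool blocking with defense in depth: blocked tools are
// removed when the tool list is built, and the block list is consulted
// again at invocation time. Illustrative only, not the real SDK API.

class PolicyViolationError extends Error {}

interface Tool {
  name: string;
  run: (input: string) => string;
}

function filterTools(tools: Tool[], blocked: Set<string>): Tool[] {
  return tools
    .filter((t) => !blocked.has(t.name)) // first check: creation time
    .map((t) => ({
      ...t,
      run: (input: string) => {
        // Second check: execution time, in case a tool slipped through
        // (e.g. the policy changed after the list was built).
        if (blocked.has(t.name)) {
          throw new PolicyViolationError(`Tool blocked: ${t.name}`);
        }
        return t.run(input);
      },
    }));
}

const blocked = new Set(["shell"]);
const tools: Tool[] = [
  { name: "search", run: (q) => `results for ${q}` },
  { name: "shell", run: (cmd) => `ran ${cmd}` },
];

const safeTools = filterTools(tools, blocked);
console.log(safeTools.map((t) => t.name)); // the blocked tool is gone
```

The execution-time re-check matters because agent frameworks can cache or clone tool lists; a single creation-time filter would not catch a policy that tightens after startup.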
Prerequisites

All integrations require:

  1. Sekuire SDK installed - @sekuire/sdk (TypeScript) or sekuire-sdk (Python)
  2. sekuire.yml configured - Defines the agent, model, policy rules, and permissions
  3. LLM API key - Set as an environment variable (e.g., OPENAI_API_KEY)
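A sekuire.yml tying these pieces together might look like the following. The field names are illustrative, not the exact schema; check the configuration reference for your SDK version:

```yaml
# Illustrative sekuire.yml; field names are examples, not the exact schema.
agent:
  name: support-bot
model:
  allowed:
    - gpt-4o
policy:
  rate_limits:
    requests_per_minute: 60
    tokens_per_minute: 100000
  tools:
    blocked:
      - shell
permissions:
  network:
    allowed_domains:
      - api.openai.com
  filesystem:
    allowed_paths:
      - ./data
```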

Choosing an Integration

  • Next.js / React Server Components - Use the Vercel AI SDK integration
  • LangChain agents or chains - Use the LangChain integration (available in both TypeScript and Python)
  • Multi-agent crews - Use the CrewAI integration
  • OpenAI Agents SDK - Use the OpenAI Agents integration
  • Custom framework - Use the Sekuire SDK directly with createLLMProvider() and PolicyEnforcer

Next Steps