CrewAI
Add Sekuire governance to CrewAI multi-agent workflows. Every LLM call from every agent in the crew goes through Sekuire policy enforcement. Tools are governed. Rate limits apply across the entire crew.
What You Get
- Every LLM call from every CrewAI agent is intercepted by Sekuire
- Tools are filtered by policy at creation time and re-checked at execution time
- Rate limits apply across the entire crew (shared enforcer)
- Model allowlists prevent agents from using unauthorized models
- Token usage is tracked and enforced post-call
The Problem
CrewAI uses litellm internally. When you pass llm="openai/gpt-4o-mini" to an agent, the call goes directly to the LLM API with no governance. There is no hook to intercept it.
The integration replaces this string-based LLM reference with a custom LLM class that routes through Sekuire:
# Without Sekuire - no governance
agent = Agent(role="Researcher", llm="openai/gpt-4o-mini")
# With Sekuire - every call is governed
llm = SekuireCrewLLM.from_config()
agent = Agent(role="Researcher", llm=llm)
Prerequisites
- Python 3.10+
- sekuire-sdk and crewai installed
- An LLM API key (e.g., OPENAI_API_KEY)
Installation
pip install sekuire-sdk crewai
Configure Governance
project:
  name: crewai-integration
  version: 1.0.0

agent:
  name: Research Assistant
  system_prompt: ./system_prompt.md
  tools: ./tools.json
  llm:
    provider: openai
    model: gpt-4o-mini
    api_key_env: OPENAI_API_KEY
    temperature: 0.7
    max_tokens: 1024

models:
  allowed_models:
    - gpt-4o-mini
    - gpt-4o
  blocked_models:
    - gpt-3.5-turbo

toolsets:
  allowed_tools:
    - name: web_search
    - name: "files:read"
    - name: calculator
  blocked_tools:
    - file_delete
    - env_set

permissions:
  network:
    enabled: true
    require_tls: true
    allowed_domains:
      - api.openai.com
      - "*.wikipedia.org"
    blocked_domains:
      - "*.malware.net"
  filesystem:
    enabled: true
    allowed_paths:
      - "./data/*"
      - "/tmp/*"
    blocked_paths:
      - "/etc/*"
      - "~/.ssh/*"

rate_limits:
  per_agent:
    requests_per_minute: 10
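The models block acts as a gate evaluated before every LLM call: a model must appear in allowed_models and must not appear in blocked_models. The decision reduces to a few lines; this is an illustrative stand-alone sketch (assuming blocked entries take precedence), not the SDK's actual implementation:

```python
from typing import Iterable

def is_model_allowed(model: str, allowed: Iterable[str], blocked: Iterable[str]) -> bool:
    # Blocked entries win over allowed entries; an empty allowlist permits nothing.
    if model in set(blocked):
        return False
    return model in set(allowed)

allowed = ["gpt-4o-mini", "gpt-4o"]
blocked = ["gpt-3.5-turbo"]
print(is_model_allowed("gpt-4o-mini", allowed, blocked))   # True
print(is_model_allowed("gpt-3.5-turbo", allowed, blocked)) # False
```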
You also need a policy.json for the Python SDK's PolicyEnforcer:
{
  "policy_id": "crewai-policy",
  "workspace_id": "local",
  "version": "1",
  "status": "active",
  "hash": "0000000000000000000000000000000000000000000000000000000000000000",
  "content": {
    "models": {
      "allowed_models": ["gpt-4o-mini", "gpt-4o"],
      "blocked_models": ["gpt-3.5-turbo"]
    },
    "tools": {
      "allowed_tools": [
        { "name": "web_search" },
        { "name": "files:read" },
        { "name": "calculator" }
      ],
      "blocked_tools": ["file_delete", "env_set"]
    },
    "permissions": {
      "network": {
        "enabled": true,
        "require_tls": true,
        "allowed_domains": ["api.openai.com", "*.wikipedia.org"],
        "blocked_domains": ["*.malware.net"]
      },
      "filesystem": {
        "enabled": true,
        "allowed_paths": ["./data/*", "/tmp/*"],
        "blocked_paths": ["/etc/*", "~/.ssh/*"]
      }
    },
    "rate_limits": {
      "per_agent": {
        "requests_per_minute": 10
      }
    }
  }
}
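The SekuireCrewLLM.from_config() code later in this guide calls a _build_enforcer() helper that is not reproduced there. A minimal sketch of what it might look like, assuming ActivePolicy accepts the parsed policy.json fields as keyword arguments and PolicyEnforcer wraps the resulting policy object (both assumptions; check the SDK's actual constructors):

```python
import json

from sekuire_sdk import PolicyEnforcer
from sekuire_sdk.policy import ActivePolicy

def _build_enforcer(policy_path: str) -> PolicyEnforcer:
    # Assumption: ActivePolicy(**data) mirrors the policy.json shape shown above
    # and PolicyEnforcer takes the policy object directly.
    with open(policy_path) as fh:
        data = json.load(fh)
    policy = ActivePolicy(**data)
    return PolicyEnforcer(policy)
```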
Create the Governed LLM
SekuireCrewLLM is a callable class that CrewAI uses as its LLM. Every call() enforces the model allowlist and the request rate limit before hitting the provider, then tracks token usage afterward.
import asyncio
import concurrent.futures
import json
import os
from typing import Dict, List, Optional

from sekuire_sdk import Message, PolicyEnforcer, PolicyViolation, load_config
from sekuire_sdk.config import get_agent_config
from sekuire_sdk.new_llm import LLMProvider, create_llm_provider
from sekuire_sdk.policy import ActivePolicy


def _run_sync(coro):
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        loop = None
    if loop and loop.is_running():
        with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
            return pool.submit(asyncio.run, coro).result()
    else:
        return asyncio.run(coro)


class SekuireCrewLLM:
    def __init__(self, provider: LLMProvider, enforcer: PolicyEnforcer, model: str):
        self._provider = provider
        self._enforcer = enforcer
        self.model = model
        self._call_count = 0

    @classmethod
    def from_config(
        cls,
        config_path: str = "./sekuire.yml",
        policy_path: Optional[str] = "./policy.json",
    ) -> "SekuireCrewLLM":
        config = load_config(config_path)
        agent_config = get_agent_config(config)
        api_key = os.environ.get(agent_config.llm.api_key_env, "")
        provider = _run_sync(create_llm_provider(
            agent_config.llm.provider,
            api_key,
            agent_config.llm.model,
            temperature=agent_config.llm.temperature,
            max_tokens=agent_config.llm.max_tokens,
        ))
        enforcer = _build_enforcer(policy_path)
        return cls(provider, enforcer, agent_config.llm.model)

    @property
    def enforcer(self) -> PolicyEnforcer:
        return self._enforcer

    @property
    def call_count(self) -> int:
        return self._call_count

    def call(self, messages: List[Dict[str, str]], **kwargs) -> str:
        self._enforcer.enforce_model(self.model)
        self._enforcer.enforce_rate_limit("request")
        sekuire_messages = [
            Message(role=m.get("role", "user"), content=m.get("content", ""))
            for m in messages
        ]
        response = _run_sync(self._provider.chat(sekuire_messages))
        self._call_count += 1
        if response.usage:
            self._enforcer.enforce_rate_limit("token", response.usage.total_tokens)
        return response.content

    def __call__(self, messages: List[Dict[str, str]], **kwargs) -> str:
        return self.call(messages, **kwargs)
Key details:
- from_config() loads the provider and enforcer from sekuire.yml and policy.json
- call() enforces model and rate-limit policy before the LLM call, then tracks tokens after
- __call__() makes the class callable, which is what CrewAI expects
- _run_sync() bridges async Sekuire providers into CrewAI's synchronous execution model
Create Governed Tools
Tools from Sekuire's registry are wrapped as CrewAI BaseTool subclasses with policy enforcement:
from typing import List, Type

from crewai.tools import BaseTool
from pydantic import BaseModel, Field
from sekuire_sdk import create_default_tool_registry, PolicyEnforcer, PolicyViolation
from sekuire_sdk.tools.base import Tool as SekuireTool

from sekuire_llm import _run_sync  # same async-to-sync bridge as the LLM module


def create_governed_tools(enforcer: PolicyEnforcer) -> List[BaseTool]:
    registry = create_default_tool_registry()
    tools: List[BaseTool] = []
    for tool in registry.list():
        try:
            enforcer.enforce_tool(tool.metadata.name)
        except PolicyViolation:
            # Blocked tools never make it into the crew's tool list.
            continue
        tools.append(_wrap_tool(tool, enforcer))
    return tools


def _wrap_tool(tool: SekuireTool, enforcer: PolicyEnforcer) -> BaseTool:
    input_model = _build_input_model(tool)

    class GovernedTool(BaseTool):
        name: str = tool.metadata.name
        description: str = tool.metadata.description
        args_schema: Type[BaseModel] = input_model
        _sekuire_tool: SekuireTool = tool
        _enforcer: PolicyEnforcer = enforcer
        model_config = {"arbitrary_types_allowed": True}

        def _run(self, **kwargs) -> str:
            self._enforcer.enforce_tool(self.name)
            self._enforcer.enforce_rate_limit("request")
            self._sekuire_tool.validate(kwargs)
            return _run_sync(self._sekuire_tool.execute(kwargs))

    return GovernedTool()
Each tool enforces policy twice:
- At creation time - blocked tools are excluded from the list entirely
- At execution time - even if a tool slips through, enforce_tool() blocks it
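The _build_input_model() helper used in _wrap_tool() above is not shown. One way to implement it, assuming each Sekuire tool exposes a JSON-Schema-like parameter description (the metadata shape here is a guess, not the SDK's documented format), is pydantic's create_model:

```python
from types import SimpleNamespace
from typing import Any, Dict, Tuple, Type

from pydantic import BaseModel, Field, create_model

_TYPE_MAP = {"string": str, "integer": int, "number": float, "boolean": bool}

def _build_input_model(tool) -> Type[BaseModel]:
    # Assumption: tool.metadata.parameters maps argument names to
    # {"type": ..., "description": ..., "required": ...} dicts.
    fields: Dict[str, Tuple[type, Any]] = {}
    for name, spec in getattr(tool.metadata, "parameters", {}).items():
        py_type = _TYPE_MAP.get(spec.get("type", "string"), str)
        default = ... if spec.get("required", True) else None
        fields[name] = (py_type, Field(default, description=spec.get("description", "")))
    return create_model(f"{tool.metadata.name.title()}Input", **fields)

# Hypothetical tool stand-in for demonstration only.
fake_tool = SimpleNamespace(metadata=SimpleNamespace(
    name="calculator",
    parameters={"expression": {"type": "string", "description": "Expression to evaluate"}},
))
CalcInput = _build_input_model(fake_tool)
print(sorted(CalcInput.model_fields))  # ['expression']
```

CrewAI uses args_schema to validate and describe tool arguments to the LLM, so whatever shape the real metadata has, the result just needs to be a BaseModel subclass.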
Build and Run a Governed Crew
from crewai import Agent, Task, Crew, Process

from sekuire_llm import SekuireCrewLLM
from sekuire_tools import create_governed_tools

llm = SekuireCrewLLM.from_config(
    config_path="./sekuire.yml",
    policy_path="./policy.json",
)
tools = create_governed_tools(llm.enforcer)

researcher = Agent(
    role="Research Analyst",
    goal="Find and analyze information about AI governance",
    backstory="You are an expert research analyst specializing in AI policy.",
    llm=llm,
    tools=tools,
    verbose=True,
)

writer = Agent(
    role="Technical Writer",
    goal="Write clear, concise summaries of research findings",
    backstory="You are a skilled technical writer who distills complex topics.",
    llm=llm,
    verbose=True,
)

research_task = Task(
    description="Research the current state of AI governance frameworks.",
    expected_output="A structured list of major AI governance frameworks.",
    agent=researcher,
)

writing_task = Task(
    description="Write a one-paragraph summary of the research findings.",
    expected_output="A concise paragraph summarizing AI governance frameworks.",
    agent=writer,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential,
    verbose=True,
)

result = crew.kickoff()
print(f"Result: {result}")
print(f"Total LLM calls intercepted: {llm.call_count}")
Pass the SekuireCrewLLM instance as llm=llm, not a string like llm="openai/gpt-4o-mini". A string bypasses governance entirely.
Policy Enforcement During Crew Execution
When the crew runs, every LLM call from every agent hits the enforcer:
--- Model Policy Enforcement ---
gpt-4o-mini: ALLOWED
gpt-4o: ALLOWED
gpt-3.5-turbo: BLOCKED - Model gpt-3.5-turbo is not allowed
--- Tool Policy Enforcement ---
web_search: ALLOWED
calculator: ALLOWED
file_delete: BLOCKED - Tool file_delete is blocked by policy
env_set: BLOCKED - Tool env_set is blocked by policy
--- Rate Limit Enforcement ---
Request 1: ALLOWED
Request 2: ALLOWED
...
Request 10: ALLOWED
Request 11: BLOCKED - Rate limit exceeded: 10 requests per minute
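The requests_per_minute limit is a shared budget: every agent in the crew draws from the same counter, and the 11th request inside a minute is rejected. The source does not specify whether the SDK uses a fixed or sliding window; this stand-alone sketch illustrates the observed behavior with a fixed window:

```python
import time

class FixedWindowLimiter:
    """Allow at most `limit` requests per 60-second window, shared by all callers."""

    def __init__(self, limit: int):
        self.limit = limit
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= 60:
            # A new window has started: reset the shared counter.
            self.window_start = now
            self.count = 0
        if self.count >= self.limit:
            return False
        self.count += 1
        return True

limiter = FixedWindowLimiter(limit=10)
results = [limiter.allow() for _ in range(11)]
print(results.count(True), results.count(False))  # 10 1
```

Because the limiter is shared, a chatty first agent can exhaust the budget before later agents run; size the limit with the whole crew in mind.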
If a policy violation occurs mid-crew, the crew stops with a PolicyViolation exception:
from sekuire_sdk import PolicyViolation
try:
result = crew.kickoff()
except PolicyViolation as err:
print(f"Crew stopped by policy: {err}")
Next Steps
- LangChain Integration - If you use LangChain instead
- OpenAI Agents Integration - Proxy-based governance for the Agents SDK
- Policy Enforcement - Full PolicyEnforcer API reference
- Example source