SDK Quickstart
Get your first AI agent running in 5 minutes.
Prerequisites
- Sekuire CLI installed
- API key from your dashboard
- LLM API key (OpenAI, Anthropic, etc.)
Step 1: Create a Project
Use the CLI to scaffold a new agent:
```bash
sekuire init --name my-agent --language typescript
cd my-agent
```

This creates a sekuire.yml config file and the project structure.
Step 2: Configure Your Agent
Edit sekuire.yml to set your LLM provider:
```yaml
project:
  name: "my-agent"
  version: "1.0.0"

agents:
  assistant:
    name: "AI Assistant"
    system_prompt: "./prompts/assistant.md"
    llm:
      provider: "openai"
      model: "gpt-4-turbo"
      api_key_env: "OPENAI_API_KEY"
      temperature: 0.7
    memory:
      type: "buffer"
      max_messages: 10
```
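The buffer memory keeps a rolling window of the most recent messages; with max_messages: 10, older turns are dropped as new ones arrive. As a rough sketch of that behavior (illustrative only, not the SDK's actual implementation):

```typescript
// Illustrative sketch of buffer memory with max_messages: 10.
// The real @sekuire/sdk memory implementation may differ.
type Message = { role: 'user' | 'assistant'; content: string };

class BufferMemory {
  private messages: Message[] = [];

  constructor(private maxMessages: number) {}

  add(message: Message): void {
    this.messages.push(message);
    // Keep only the most recent maxMessages entries
    if (this.messages.length > this.maxMessages) {
      this.messages = this.messages.slice(-this.maxMessages);
    }
  }

  history(): Message[] {
    return [...this.messages];
  }
}
```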
Set your API key in the environment:

```bash
export OPENAI_API_KEY="sk-..."
# or add to .env file
```
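If you use a .env file, make sure the variables are loaded into process.env before the agent is created, for example with the dotenv package (an assumption here; the SDK may load .env files on its own):

```typescript
// Load variables from a .env file into process.env at startup.
// Assumes the dotenv package is installed (npm install dotenv);
// the Sekuire SDK may already handle this automatically.
import 'dotenv/config';
```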
Step 3: Install the SDK

```bash
npm install @sekuire/sdk
```

Step 4: Use the Agent
Save the following as main.ts:

```typescript
import { getAgent } from '@sekuire/sdk';

async function main() {
  // Load the agent defined in sekuire.yml
  const agent = await getAgent('assistant');

  // Chat with the agent
  const response = await agent.chat('What is the capital of France?');
  console.log(response);

  // Continue the conversation
  const followUp = await agent.chat('What is its population?');
  console.log(followUp);

  // Get the conversation history
  const history = agent.getHistory();
  console.log(`Messages: ${history.length}`);
}

main();
```
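If you want failures such as a missing API key to exit with a clear error rather than an unhandled promise rejection, you can replace the bare main() call with a catch on the entry point (optional, not required by the SDK):

```typescript
// Optional: surface startup or chat errors and exit non-zero.
main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```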
Step 5: Run It

```bash
npx tsx main.ts
# or
node --loader ts-node/esm main.ts
```

💡 Use `sekuire run` to run your agent with automatic bootstrap and heartbeat for production deployments.

Next Steps
- Agent API — Deep dive into agent methods
- Built-in Tools — Add capabilities to your agent
- Streaming — Real-time token responses
- Publishing — Push to the registry