# Agent API

The `SekuireAgent` class is the core interface for interacting with LLMs.
## Loading Agents

### getAgent()

Load a single agent by name from `sekuire.yml`:
**TypeScript**

```typescript
import { getAgent } from '@sekuire/sdk';

// Load agent by name
const agent = await getAgent('assistant');

// Load with a custom config path
const customAgent = await getAgent('assistant', './config/sekuire.yml');
```

**Python**

```python
from sekuire_sdk import get_agent

# Load agent by name
agent = await get_agent('assistant')

# Load with a custom config path
agent = await get_agent('assistant', './config/sekuire.yml')
```

**Rust**

```rust
use sekuire_sdk::get_agent;

// Load agent by name
let agent = get_agent(Some("assistant"), None).await?;

// Load with a custom config path
let agent = get_agent(Some("assistant"), Some("./config/sekuire.yml")).await?;
```
### getAgents()

Load all agents from the config:
**TypeScript**

```typescript
import { getAgents } from '@sekuire/sdk';

const agents = await getAgents();
const researcher = agents['researcher'];
const coder = agents['coder'];
```

**Python**

```python
from sekuire_sdk import get_agents

agents = await get_agents()
researcher = agents['researcher']
coder = agents['coder']
```

**Rust**

```rust
use sekuire_sdk::get_agents;

let agents = get_agents(None).await?;
let researcher = agents.get("researcher").unwrap();
```
## Chat Methods

### chat()

Send a message and get a complete response:
**TypeScript**

```typescript
const response = await agent.chat('Hello, how are you?');
console.log(response);

// With options
const detailed = await agent.chat('Explain quantum computing', {
  temperature: 0.5,
  maxTokens: 500,
});
```

**Python**

```python
response = await agent.chat('Hello, how are you?')
print(response)

# With options
response = await agent.chat('Explain quantum computing', {
    'temperature': 0.5,
    'max_tokens': 500,
})
```

**Rust**

```rust
let response = agent.chat("Hello, how are you?", None).await?;
println!("{}", response);

// With options
let mut options = ChatOptions::default();
options.temperature = Some(0.5);
options.max_tokens = Some(500);
let response = agent.chat("Explain quantum computing", Some(options)).await?;
```
### chatStream()

Stream response tokens for real-time output. See Streaming for details.
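As a rough sketch of the consumption pattern — assuming `chatStream()` returns an async iterable of string tokens, which is an assumption this page does not confirm — with a mock agent standing in for a real `SekuireAgent`:

```typescript
// Mock agent illustrating the assumed chatStream() contract:
// an async iterable that yields string tokens as they arrive.
// The fixed token list is a placeholder; a real agent streams from the LLM.
const mockAgent = {
  async *chatStream(message: string): AsyncGenerator<string> {
    for (const token of ['Hello', ', ', 'world', '!']) {
      yield token;
    }
  },
};

async function main() {
  let output = '';
  for await (const token of mockAgent.chatStream('Hi')) {
    output += token; // in real code: render each token incrementally
  }
  console.log(output); // "Hello, world!"
}

main();
```

The `for await...of` loop consumes tokens as they are produced, so output can be rendered before the full response completes.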
## Conversation History
**TypeScript**

```typescript
// Get history
const history = agent.getHistory();
console.log(`Messages: ${history.length}`);

// Clear history
agent.clearHistory();

// History format:
// [
//   { role: 'user', content: 'Hello' },
//   { role: 'assistant', content: 'Hi there!' },
// ]
```

**Python**

```python
# Get history
history = agent.get_history()
print(f"Messages: {len(history)}")

# Clear history
agent.clear_history()

# History format:
# [
#     {'role': 'user', 'content': 'Hello'},
#     {'role': 'assistant', 'content': 'Hi there!'},
# ]
```

**Rust**

```rust
// Get history
let history = agent.get_history();
println!("Messages: {}", history.len());

// Clear history
agent.clear_history();

// History format: Vec<Message>
// Message { role: "user", content: "Hello" }
// Message { role: "assistant", content: "Hi there!" }
```
## Agent Metadata
**TypeScript**

```typescript
// Get provider info
const provider = agent.getLlmProvider(); // "openai"
const model = agent.getModelName();      // "gpt-4-turbo"
const tools = agent.getTools();          // ["calculator", "web_search"]
```

**Python**

```python
# Get provider info
provider = agent.get_llm_provider()  # "openai"
model = agent.get_model_name()       # "gpt-4-turbo"
tools = agent.get_tools()            # ["calculator", "web_search"]
```

**Rust**

```rust
// Get provider info
let provider = agent.get_llm_provider(); // "openai"
let model = agent.get_model_name();      // "gpt-4-turbo"
let tools = agent.get_tools();           // ["calculator", "web_search"]
```
## API Reference
| Method | Description |
|---|---|
| `chat(message, options?)` | Send a message and get the full response |
| `chatStream(message, options?)` | Stream response tokens |
| `getHistory()` | Get the conversation history |
| `clearHistory()` | Clear the conversation history |
| `getLlmProvider()` | Get the LLM provider name |
| `getModelName()` | Get the model name |
| `getTools()` | Get the registered tool names |
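To illustrate how these methods fit together, here is a minimal in-memory stand-in for `SekuireAgent` (the class and its echo reply are placeholders, not the real SDK, which requires a configured LLM provider):

```typescript
// Minimal stand-in for SekuireAgent, illustrating the method contracts above.
type Message = { role: 'user' | 'assistant'; content: string };

class FakeAgent {
  private history: Message[] = [];

  // Record the user message, produce a reply, and record it too.
  async chat(message: string): Promise<string> {
    this.history.push({ role: 'user', content: message });
    const reply = `echo: ${message}`; // placeholder for a real LLM call
    this.history.push({ role: 'assistant', content: reply });
    return reply;
  }

  getHistory(): Message[] {
    return this.history;
  }

  clearHistory(): void {
    this.history = [];
  }
}

async function main() {
  const agent = new FakeAgent();
  await agent.chat('Hello');
  await agent.chat('How are you?');
  console.log(agent.getHistory().length); // 4: two user + two assistant messages
  agent.clearHistory();
  console.log(agent.getHistory().length); // 0
}

main();
```

Each `chat()` call appends both the user message and the assistant reply to the history, which is why two turns yield four messages.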
## Next Steps
- Built-in Tools - Add capabilities
- Streaming - Real-time responses