# Agents
Agents are autonomous sub-personalities that ScalyClaw can delegate complex tasks to. Each agent has its own system prompt, model selection, and tool permissions — letting you compose specialised expertise without burdening the main orchestrator with every capability at once.
## Agent System
The main orchestrator handles your day-to-day messages, maintains conversation state, and decides when a task is better handled by a specialist. Agents fill that specialist role: they run independently, can use a different model, and operate under their own permission set.
### When to use agents
- Long-running research — tasks that require many tool calls or deep iteration that would clutter the main conversation thread
- Specialised domains — coding, writing, data analysis, or any area where a focused system prompt and a purpose-fit model outperform the general orchestrator
- Different model capabilities — run a cheap, fast model for classification sub-tasks while the main orchestrator uses a more capable model for reasoning
- Parallel workloads — the agents queue processes jobs concurrently, so multiple agents can work simultaneously on independent sub-problems
Agents run via the BullMQ agents queue. They do not share the main orchestrator's conversation loop — each delegation is an isolated job that returns a single result.
| Property | Main Orchestrator | Agent |
|---|---|---|
| Scope | Full conversation, session state, channel I/O | Single delegated task, no channel access |
| Tools | All tools enabled by default | Explicit tools list per agent config |
| Model | Configured globally in scalyclaw:config | Overridable per agent |
| System prompt | Built from mind/ files + code sections | Agent-specific prompt defined at creation |
| Max iterations | Governed by session timeout | Configurable maxIterations + timeout |
| Memory access | Full read and write | Full read and write (shared memory system) |
## Creating Agents
Each agent lives in its own directory under ~/.scalyclaw/agents/{agent-id}/. The directory contains an AGENT.md file with YAML frontmatter for identity and a body for the system prompt. Configuration (model selection, tools, skills, etc.) is stored separately in Redis under scalyclaw:config inside the orchestrator.agents array. Agents are hot-reloaded via the scalyclaw:agents:reload pub/sub channel — no restart required.
### AGENT.md structure
The AGENT.md file defines the agent's identity and system prompt:
| Field | Location | Description |
|---|---|---|
| name | YAML frontmatter | Human-readable display name for the agent |
| description | YAML frontmatter | Shown to the orchestrator so it knows when to delegate to this agent |
| (body) | Markdown body | The agent's complete system prompt — full persona and instructions |
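The split described above can be sketched in code. This is a minimal, hypothetical parser, not ScalyClaw's actual loader (which may well use a real YAML library); it only handles the flat `key: value` fields shown in this document:

```typescript
// Hypothetical sketch: split an AGENT.md string into frontmatter fields
// and the system-prompt body. Assumes flat "key: value" frontmatter only.
interface AgentIdentity {
  name: string;
  description: string;
  systemPrompt: string;
}

function parseAgentMd(raw: string): AgentIdentity {
  // Frontmatter is delimited by "---" lines; everything after is the prompt.
  const match = raw.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) throw new Error("AGENT.md is missing YAML frontmatter");
  const fields: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) fields[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return {
    name: fields["name"] ?? "",
    description: fields["description"] ?? "",
    systemPrompt: match[2].trim(),
  };
}
```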
### Config properties
Each entry in config.orchestrator.agents[] controls runtime behaviour:
| Property | Type | Description |
|---|---|---|
| id | string | Must match the agent's directory name; used when calling delegate_agent |
| enabled | boolean | Whether the agent is available for delegation |
| maxIterations | number | Maximum tool-call loops before the agent is forced to return a result (default 25) |
| models | { model, weight, priority }[] | Ordered list of models with weights for load balancing |
| skills | string[] | Skill IDs the agent may invoke |
| tools | string[] | Tool names the agent may call (see Tool Access below) |
| mcpServers | string[] | MCP server IDs available to the agent |
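The docs describe `models` as a weighted, prioritised list but do not specify the selection algorithm. One plausible interpretation, sketched below under the assumption that a lower `priority` number means a more preferred tier, is to pick randomly within the top tier, weighted by each entry's `weight`:

```typescript
// Hypothetical sketch of weighted model selection over the documented
// { model, weight, priority } shape. Assumption: lower priority = preferred.
interface ModelEntry {
  model: string;
  weight: number;
  priority: number;
}

function pickModel(models: ModelEntry[], rand: () => number = Math.random): string {
  if (models.length === 0) throw new Error("no models configured");
  // Keep only the most-preferred priority tier.
  const top = Math.min(...models.map((m) => m.priority));
  const tier = models.filter((m) => m.priority === top);
  // Weighted random draw within that tier.
  const total = tier.reduce((sum, m) => sum + m.weight, 0);
  let r = rand() * total;
  for (const m of tier) {
    r -= m.weight;
    if (r <= 0) return m.model;
  }
  return tier[tier.length - 1].model;
}
```

Injecting `rand` keeps the function deterministic under test; lower-priority entries act as fallbacks that never win while a higher tier exists.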
### Example files

`~/.scalyclaw/agents/researcher/AGENT.md`:

```md
---
name: Researcher
description: Deep research agent. Use for tasks requiring web search, source synthesis, and long-form analysis.
---
You are a meticulous research assistant. Search thoroughly, cite sources,
and produce structured summaries. Never guess — if you cannot find a
reliable source, say so.
```
The matching entry in `config.orchestrator.agents[]`:

```json
{
  "id": "researcher",
  "enabled": true,
  "maxIterations": 20,
  "models": [{ "model": "claude-opus-4-6", "weight": 1, "priority": 1 }],
  "skills": [],
  "tools": ["web_search", "fetch_url", "memory_store", "memory_search"],
  "mcpServers": []
}
```
## Tool access
Agents have a restricted tool surface compared to the main orchestrator. The tools an agent can use fall into two categories depending on how they are executed:
| Category | Tools | Notes |
|---|---|---|
| Direct (run locally) | send_message, send_file, all 5 memory tools, vault_check, vault_list, all file I/O tools | Executed inline inside the agent worker |
| Job (queued) | execute_command, execute_skill, execute_code | Routed to the tools BullMQ queue |
| Not allowed | delegate_agent, schedule_* | Agents cannot self-delegate or schedule jobs |
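The categorisation above can be sketched as a routing function. This is illustrative, not the real dispatcher in `tool-impl.ts`; it also assumes the agent's per-config `tools` allowlist has already been applied, so anything not queued or forbidden runs inline:

```typescript
// Hypothetical sketch of the three routing categories from the table.
// Real dispatch lives in tool-impl.ts; tool sets here are from the docs.
type ToolRoute = "direct" | "job" | "forbidden";

const JOB_TOOLS = new Set(["execute_command", "execute_skill", "execute_code"]);
const FORBIDDEN_PREFIXES = ["delegate_agent", "schedule_"];

function routeAgentTool(name: string): ToolRoute {
  // Agents may never self-delegate or schedule jobs (schedule_* wildcard).
  if (FORBIDDEN_PREFIXES.some((p) => name === p || name.startsWith(p))) {
    return "forbidden";
  }
  // Heavy execution tools are queued on the tools BullMQ queue.
  if (JOB_TOOLS.has(name)) return "job";
  // Messaging, memory, vault, and file I/O tools run inline in the worker.
  return "direct";
}
```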
## Built-in agents
ScalyClaw ships with one built-in agent out of the box:
| ID | Purpose |
|---|---|
| skill-creator-agent | Creates new skills automatically when the orchestrator determines a reusable skill is needed |
## Hot-reload
Every time you save an agent's config or AGENT.md, ScalyClaw publishes to the scalyclaw:agents:reload Redis pub/sub channel. All running workers subscribe to this channel and refresh their in-memory agent registry immediately — no process restart needed.
```bash
# Manually trigger an agent reload from the CLI
redis-cli PUBLISH scalyclaw:agents:reload "{}"
```
## Execution
Delegation follows a straightforward pipeline. The orchestrator decides a task is better handled by a specialist, calls the delegate_agent tool, and waits for the result. The agent runs in isolation and returns a single text result that the orchestrator incorporates into its response.
### Delegation flow

```typescript
// 1. Orchestrator calls the delegate_agent tool
const result = await delegate_agent({
  agent: "researcher",
  task: "Summarise the latest developments in quantum error correction, citing primary sources published after 2023.",
});

// 2. tool-impl.ts routes delegate_agent to the agents BullMQ queue
await agentsQueue.add("delegate", {
  agentName: "researcher",
  task, // the task string from the tool call
  channelId: ctx.channelId,
});

// 3. Agent worker picks up the job, runs the agent's own prompt + model.
//    The agent may call its configured tools in a loop up to maxIterations.
// 4. Worker resolves the job with the agent's final text result.
// 5. Orchestrator receives the result and incorporates it into the response.
```
### Step-by-step

1. The main orchestrator receives a user message and decides, based on the agent's `description`, that a specialist is the right tool for the job.
2. It calls `delegate_agent` with the agent name and a task description string.
3. `tool-impl.ts` intercepts the call and adds a job to the BullMQ agents queue instead of running it locally.
4. An agent worker (concurrency 3) picks up the job. It loads the agent's runtime config from Redis, reads the agent's `AGENT.md` from disk to build the system prompt, selects the agent's model, and starts an isolated LLM loop.
5. The agent calls its permitted tools as many times as needed, up to `maxIterations`.
6. When the agent produces a final answer, the worker resolves the BullMQ job with the result text.
7. The orchestrator receives the result as the return value of the `delegate_agent` tool call and incorporates it into the final response sent to the user.
Agents share the same memory system as the main orchestrator. They can call memory_store to store findings and memory_search to retrieve past context — the same SQLite + sqlite-vec store used everywhere. Facts an agent discovers are immediately available to the orchestrator and to future agents.
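A tiny in-memory stand-in illustrates why sharing one store matters: a fact written by an agent is immediately visible to any other reader of the same instance. The class and method names are hypothetical, and substring matching stands in for the real sqlite-vec vector search:

```typescript
// Hypothetical stand-in for the shared memory store. Not the real API:
// the real store is SQLite + sqlite-vec with vector search.
class SharedMemory {
  private facts: string[] = [];

  store(fact: string): void {
    this.facts.push(fact);
  }

  search(query: string): string[] {
    // Substring match stands in for vector similarity search.
    const q = query.toLowerCase();
    return this.facts.filter((f) => f.toLowerCase().includes(q));
  }
}

// One shared instance: an agent writes, the orchestrator later reads.
const memory = new SharedMemory();
memory.store("Brent crude traded at $82.40/bbl on 2026-02-24");
```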
### Example: research delegation

What the orchestrator sends to the agents queue:

```json
{
  "agentName": "researcher",
  "task": "Find the current price of Brent crude and summarise the three most recent analyst forecasts.",
  "channelId": "telegram:123456789"
}
```

What the agent returns after running web_search + fetch_url in a loop:

```json
{
  "result": "Brent crude is trading at $82.40/bbl as of 2026-02-24. Goldman Sachs forecasts $85 by Q3 citing supply tightening; JPMorgan holds $78 on demand softness; BofA sees $80 as the 12-month base case.",
  "iterations": 4,
  "durationMs": 9420
}
```
The task string you pass to delegate_agent is the only instruction the agent receives beyond its own system prompt. Make it specific: include scope, desired format, and any constraints. Vague tasks produce vague results, and the orchestrator cannot course-correct mid-run.
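As an illustration of that advice (the strings below are invented examples, not documented behaviour), compare two task strings for the same request:

```typescript
// Illustrative only: the specific version encodes scope, output format,
// and naming constraints, because the agent gets no further instruction.
const vagueTask = "Look into oil prices.";
const specificTask =
  "Find the current Brent crude spot price, then summarise the three most " +
  "recent analyst forecasts as bullets, naming each bank and report date.";
```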