Architecture
ScalyClaw is built around three independent processes — Node, Worker, and Dashboard — connected entirely through Redis. There are no direct inter-process HTTP calls; all coordination happens through Redis data structures, BullMQ queues, and pub/sub channels. This design means each component can be scaled, restarted, or replaced independently without affecting the others.
Overview
Each process has a focused responsibility. The Node is the stateful brain: it owns channel connections, the orchestrator, and LLM calls. The Worker is the stateless execution engine: it picks up jobs from BullMQ queues and executes code, agents, and tools without holding any in-memory state. The Dashboard is a React 19 single-page application that reads and writes through a WebSocket-capable API, reflecting system state in real time.
| Component | Technology | Responsibility |
|---|---|---|
| Node (stateful) | Bun, TypeScript | Channel adapters, guards, orchestrator, system prompt assembly, LLM routing, queue producers |
| Worker (stateless) | Bun, TypeScript, BullMQ | BullMQ queue consumer for the `scalyclaw-tools` queue only — sandboxed code execution, skill invocations, and shell commands |
| Dashboard (UI) | React 19, Vite, WebSocket | 16-page admin SPA — config management, channel setup, mind editor, memory browser, skill/agent management, real-time logs |
All three components connect to the same Redis instance. Redis plays four distinct roles in the system:
- Message bus — BullMQ queues for all async work
- Config store — live system configuration at `scalyclaw:config`
- Secret vault — encrypted secrets at `scalyclaw:secret:*`
- Pub/sub — hot-reload signals for skills, agents, and config
ScalyClaw intentionally avoids config files on disk for runtime configuration. Everything the running system needs — API keys, model settings, channel tokens, feature flags — lives in Redis and can be changed live from the dashboard without a restart.
Component Diagram
Message Flow
Every inbound message follows a deterministic pipeline. The pipeline is designed so that each stage can short-circuit cleanly — a guard rejection stops processing early without leaking state.
Unified Conversation
ScalyClaw is a single-user assistant. There is one shared conversation history across all channels — stored in SQLite, not isolated per channel. When you send a message from Telegram, the full conversation history (including messages from Discord, Slack, and the web gateway) is loaded into context. Responses are routed back to the source channel only. The channelId is recorded as a source marker in each message, not as an isolation boundary.
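A minimal sketch of this context-building rule, with hypothetical types and names:

```typescript
// channelId is a source marker, not an isolation boundary.
interface StoredMessage {
  channelId: string;
  role: "user" | "assistant";
  text: string;
}

// The full history is loaded into context regardless of source channel;
// only the reply routing depends on where the message came from.
function buildContext(history: StoredMessage[], sourceChannel: string) {
  return {
    messages: history, // no per-channel filtering
    replyTo: sourceChannel, // responses go back to the source channel only
  };
}
```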
Cancellation
Cancellation uses a simple Redis flag. When the user sends `/stop`, the Node sets `scalyclaw:cancel` in Redis with a 30-second TTL. The orchestrator checks this flag between LLM rounds — if set, it aborts the current processing, clears the flag, and returns immediately.
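The flag protocol can be sketched as follows. A small KV interface stands in for Redis here (in the real system these calls map to `SET scalyclaw:cancel 1 EX 30`, `GET`, and `DEL`), and the helper names are hypothetical:

```typescript
interface KV {
  set(key: string, value: string, ttlSeconds: number): void;
  get(key: string): string | null;
  del(key: string): void;
}

const CANCEL_KEY = "scalyclaw:cancel";

// Called by the Node when the user sends /stop.
function requestCancel(kv: KV): void {
  kv.set(CANCEL_KEY, "1", 30); // 30-second TTL
}

// Called by the orchestrator between LLM rounds.
function shouldAbort(kv: KV): boolean {
  if (kv.get(CANCEL_KEY) !== null) {
    kv.del(CANCEL_KEY); // clear the flag so the next run starts clean
    return true;
  }
  return false;
}

// In-memory stand-in for Redis (TTL expiry omitted for brevity):
const mem = new Map<string, string>();
const memKV: KV = {
  set: (k, v) => { mem.set(k, v); },
  get: (k) => mem.get(k) ?? null,
  del: (k) => { mem.delete(k); },
};
```

The TTL matters: if the orchestrator is not running when `/stop` arrives, the flag expires on its own instead of silently cancelling a future, unrelated run.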
System Prompt Assembly
The orchestrator builds the system prompt fresh for every LLM call. It combines three sources:
- Disk files — `mind/IDENTITY.md`, `mind/SOUL.md`, `mind/USER.md` (user-editable personality)
- Code sections — three code-defined sections in `scalyclaw/src/prompt/`: `core-instructions`, `knowledge`, and `extensions`
- Dynamic data — current time, active channel, recent memories retrieved via semantic search, resolved secrets from vault, skill manifests, agent definitions
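As a rough sketch, the assembly step amounts to concatenating the three sources fresh on each call (the structure and names below are assumptions, not taken from the codebase):

```typescript
interface PromptSources {
  mindFiles: string[];    // contents of mind/IDENTITY.md, mind/SOUL.md, mind/USER.md
  codeSections: string[]; // core-instructions, knowledge, extensions
  dynamic: string[];      // current time, active channel, retrieved memories, manifests
}

// Built fresh for every LLM call; nothing is cached between rounds,
// so edits to mind files or config take effect on the very next call.
function assembleSystemPrompt(src: PromptSources): string {
  return [...src.codeSections, ...src.mindFiles, ...src.dynamic].join("\n\n");
}
```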
Tool Execution Routing
When the LLM emits a tool call, the unified tool router in `tool-impl.ts` decides where to execute it. Local tools run inline in the Node process. Heavy or sandboxed tools are dispatched to BullMQ queues and the Node awaits the result:
| Tool | Execution Target | Queue |
|---|---|---|
| `execute_code` | Worker sandbox | `scalyclaw-tools` |
| `execute_skill` | Worker skill runner | `scalyclaw-tools` |
| `execute_command` | Worker shell executor | `scalyclaw-tools` |
| `delegate_agent` | Node agent executor | `scalyclaw-agents` |
| Everything else | Node — inline (direct execution) | — (no queue) |
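The routing decision in the table can be sketched as a single switch — a simplified stand-in for the real logic in `tool-impl.ts`, not a copy of it:

```typescript
type Route =
  | { target: "node-inline" }
  | { target: "queue"; queue: string };

// Map a tool name to its execution target, mirroring the table above.
function routeTool(toolName: string): Route {
  switch (toolName) {
    case "execute_code":
    case "execute_skill":
    case "execute_command":
      return { target: "queue", queue: "scalyclaw-tools" };
    case "delegate_agent":
      return { target: "queue", queue: "scalyclaw-agents" };
    default:
      return { target: "node-inline" }; // everything else runs in the Node process
  }
}
```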
Queue System
ScalyClaw uses four BullMQ queues. The separate Worker process consumes only the scalyclaw-tools queue. The Node process runs its own BullMQ workers for the remaining three queues. Jobs are persistent — Redis stores them until acknowledged — so a restart never loses work in progress.
BullMQ forbids colons (`:`) in queue names because it uses colons internally as key separators in Redis. All ScalyClaw queue names use hyphens — e.g. `scalyclaw-messages`, not `scalyclaw:messages`.
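A small validation helper makes the rule concrete (hypothetical — ScalyClaw may not ship such a check; BullMQ itself rejects colon-containing names at queue construction):

```typescript
// Reject queue names that would collide with BullMQ's internal ":" key separator.
function assertValidQueueName(name: string): string {
  if (name.includes(":")) {
    throw new Error(`Invalid BullMQ queue name "${name}": colons are reserved`);
  }
  return name;
}
```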
| Queue name | Consumed by | Concurrency | Job types |
|---|---|---|---|
| `scalyclaw-messages` | Node (`message-processor.ts`) | 5 | message-processing, command |
| `scalyclaw-agents` | Node (`agent-processor.ts`) | 3 | agent-task |
| `scalyclaw-tools` | Worker (`tool-processor.ts`) | — | tool-execution, skill-execution |
| `scalyclaw-internal` | Node (`internal-processor.ts`) | 3 | proactive-check, reminder, recurrent-reminder, task, recurrent-task, memory-extraction, vault-key-rotation |
Worker Scaling
Because the Worker is stateless and only consumes the scalyclaw-tools queue, you can run as many Worker processes as you need to scale tool throughput. Each Worker connects to the same Redis and competes with peers for jobs. BullMQ handles distributed locking internally — a job is processed by exactly one worker even when many are running. The three Node-internal queues (scalyclaw-messages, scalyclaw-agents, scalyclaw-internal) are always consumed by the Node process itself.
```shell
# Run two workers for higher tool throughput
scalyclaw worker start &
scalyclaw worker start &
```
Configuration
ScalyClaw stores all runtime configuration in Redis at the key scalyclaw:config as a JSON object. There are no config files on disk — the install is self-contained and portable. When you change a setting in the dashboard, it writes directly to Redis; all processes pick it up without a restart.
Config Structure
```json
{
  "orchestrator": {
    "id": "default",
    "maxIterations": 50,
    "maxInputTokens": 150000,
    "models": [{ "model": "claude-sonnet-4-20250514", "weight": 100, "priority": 1 }],
    "skills": [],
    "agents": []
  },
  "gateway": {
    "host": "127.0.0.1",
    "port": 3000,
    "bind": "127.0.0.1",
    "authType": "none",
    "authValue": null,
    "tls": { "cert": "", "key": "" },
    "cors": []
  },
  "logs": { "level": ["all"], "format": "json", "type": "console" },
  "memory": {
    "topK": 10,
    "scoreThreshold": 0.5,
    "embeddingModel": "auto"
  },
  "queue": {
    "lockDuration": 18300000,
    "stalledInterval": 30000,
    "limiter": { "max": 10, "duration": 1000 },
    "removeOnComplete": { "age": 86400, "count": 1000 },
    "removeOnFail": { "age": 604800 }
  },
  "models": {
    "providers": {
      "anthropic": { "apiKey": "sk-ant-..." },
      "openai": { "apiKey": "sk-..." }
    },
    "models": [{
      "id": "claude-sonnet",
      "name": "claude-sonnet-4-20250514",
      "provider": "anthropic",
      "enabled": true,
      "priority": 1,
      "weight": 100,
      "temperature": 0.7,
      "maxTokens": 8192,
      "contextWindow": 200000,
      "toolEnabled": true,
      "imageEnabled": true,
      "audioEnabled": false,
      "videoEnabled": false,
      "documentEnabled": true,
      "reasoningEnabled": false,
      "inputPricePerMillion": 3,
      "outputPricePerMillion": 15
    }],
    "embeddingModels": [{
      "id": "text-embedding-3-small",
      "name": "text-embedding-3-small",
      "provider": "openai",
      "enabled": true,
      "priority": 1,
      "weight": 100,
      "dimensions": 1536,
      "inputPricePerMillion": 0.02,
      "outputPricePerMillion": 0
    }]
  },
  "guards": {
    "message": {
      "enabled": true,
      "model": "",
      "echoGuard": { "enabled": false, "similarityThreshold": 0.7 },
      "contentGuard": { "enabled": true }
    },
    "skill": { "enabled": true, "model": "" },
    "agent": { "enabled": true, "model": "" },
    "commandShield": { "enabled": true, "denied": ["rm -rf /", "mkfs.", "shutdown", "..."], "allowed": [] }
  },
  "budget": {
    "monthlyLimit": 0,
    "dailyLimit": 0,
    "hardLimit": false,
    "alertThresholds": [50, 80, 90]
  },
  "proactive": {
    "enabled": true,
    "model": "",
    "cronPattern": "*/15 * * * *",
    "idleThresholdMinutes": 120,
    "cooldownSeconds": 14400,
    "maxPerDay": 3,
    "quietHours": {
      "enabled": true,
      "start": 22,
      "end": 8,
      "timezone": "UTC"
    }
  },
  "channels": {},
  "skills": [],
  "mcpServers": {}
}
```
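A process that wants the live config simply GETs the key and parses the JSON. The sketch below uses a hypothetical `parseConfig` helper and elides the Redis call itself; only the `orchestrator` section is typed, for brevity:

```typescript
// Hypothetical helper: parse the raw JSON stored at scalyclaw:config.
// (In the running system the raw string would come from a Redis GET.)
interface OrchestratorConfig {
  maxIterations: number;
  maxInputTokens: number;
}

interface ScalyClawConfig {
  orchestrator: OrchestratorConfig;
  // ...remaining sections omitted for brevity
}

function parseConfig(raw: string | null): ScalyClawConfig {
  if (raw === null) {
    throw new Error("scalyclaw:config not found in Redis");
  }
  return JSON.parse(raw) as ScalyClawConfig;
}
```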
Hot Reload via Pub/Sub
When skills or agents change — whether edited in the dashboard or auto-created by the LLM — Redis pub/sub signals all running processes to reload their in-memory manifests without a restart. The Node subscribes to these channels at startup and refreshes its internal registry immediately on receipt.
| Pub/sub channel | Triggered by | Effect |
|---|---|---|
| `scalyclaw:skills:reload` | Dashboard skill editor, LLM skill creation tool | Node reloads all skill manifests from Redis; Worker flushes its skill module cache |
| `scalyclaw:agents:reload` | Dashboard agent editor, LLM agent creation tool | Node reloads all agent definitions; next `delegate_agent` call uses updated config |
| `scalyclaw:config:reload` | Dashboard config editor | All processes re-read `scalyclaw:config` from Redis and apply the new settings without a restart |
```typescript
// Node subscribes at startup
const subscriber = redis.duplicate();
await subscriber.subscribe(
  "scalyclaw:skills:reload",
  "scalyclaw:agents:reload",
  "scalyclaw:config:reload",
);
subscriber.on("message", async (channel) => {
  if (channel === "scalyclaw:skills:reload") {
    await reloadSkills();
  } else if (channel === "scalyclaw:agents:reload") {
    await reloadAgents();
  } else if (channel === "scalyclaw:config:reload") {
    await reloadConfig();
  }
});
```
Secrets
API keys and tokens are stored in the config object directly (e.g., providers.anthropic.apiKey). For additional secrets (channel tokens, MCP headers, etc.), use the vault at scalyclaw:secret:{name} in Redis. Secrets are managed via the dashboard Vault page and never written to disk.
You can inspect the live config at any time with `redis-cli GET scalyclaw:config | jq`. Changes written by the dashboard are immediately visible there.