Architecture

ScalyClaw is built around three independent processes — Node, Worker, and Dashboard — connected entirely through Redis. There are no direct inter-process HTTP calls; all coordination happens through Redis data structures, BullMQ queues, and pub/sub channels. This design means each component can be scaled, restarted, or replaced independently without affecting the others.

Overview

Each process has a focused responsibility. The Node is the stateful brain: it owns channel connections, the orchestrator, and LLM calls. The Worker is the stateless execution engine: it picks up jobs from BullMQ queues and executes code, agents, and tools without holding any in-memory state. The Dashboard is a React 19 single-page application that reads and writes through a WebSocket-capable API, reflecting system state in real time.

| Component | Technology | Responsibility |
| --- | --- | --- |
| Node (stateful) | Bun, TypeScript | Channel adapters, guards, orchestrator, system prompt assembly, LLM routing, queue producers |
| Worker (stateless) | Bun, TypeScript, BullMQ | BullMQ queue consumer for the scalyclaw-tools queue only — sandboxed code execution, skill invocations, and shell commands |
| Dashboard (UI) | React 19, Vite, WebSocket | 16-page admin SPA — config management, channel setup, mind editor, memory browser, skill/agent management, real-time logs |

All three components connect to the same Redis instance. Redis plays four distinct roles in the system:

  • Message bus — BullMQ queues for all async work
  • Config store — live system configuration at scalyclaw:config
  • Secret vault — encrypted secrets at scalyclaw:secret:*
  • Pub/sub — hot-reload signals for skills, agents, and config

Design principle

ScalyClaw intentionally avoids config files on disk for runtime configuration. Everything the running system needs — API keys, model settings, channel tokens, feature flags — lives in Redis and can be changed live from the dashboard without a restart.

Component Diagram

Telegram   Discord   Slack   Web   ...
  NODE (channel adapters → orchestrator → LLM)
         ↕  Redis (BullMQ + pub/sub)
  WORKER (tools queue — code, skills, commands)
         ↕  Redis (config + WebSocket events)
  DASHBOARD (React 19 SPA · real-time WebSocket)

Message Flow

Every inbound message follows a deterministic pipeline. The pipeline is designed so that each stage can short-circuit cleanly — a guard rejection stops processing early without leaking state.

1. Channel receives message
2. BullMQ messages queue (concurrency=5)
     ↓ job picked up
3. Echo guard (reject messages from self)
     ↓ passed
4. Content guard (LLM-based policy check)
     ↓ approved
5. Orchestrator builds system prompt
     ↓ disk files + code sections + dynamic data
6. LLM call (streamed, with tool-use support)
7. Tool execution? (local / tools queue / agents queue)
     ↓ loop until done
8. Response sent back to source channel
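
The short-circuit behavior can be sketched as a chain of early returns. This is an illustrative outline, not the actual message-processor code — the guard internals here are pure stubs, while the real guards compare similarity and call the LLM:

```typescript
// Illustrative pipeline skeleton: each stage may short-circuit by returning null.
interface InboundMessage {
  channelId: string;
  senderIsSelf: boolean;
  text: string;
}

type Verdict = { ok: true } | { ok: false; reason: string };

function echoGuard(msg: InboundMessage): Verdict {
  // Step 3: reject messages the bot itself sent
  return msg.senderIsSelf ? { ok: false, reason: "echo" } : { ok: true };
}

function contentGuard(msg: InboundMessage): Verdict {
  // Step 4: stand-in for the LLM-based policy check
  return msg.text.trim().length > 0 ? { ok: true } : { ok: false, reason: "empty" };
}

function processMessage(msg: InboundMessage): string | null {
  for (const guard of [echoGuard, contentGuard]) {
    const verdict = guard(msg);
    if (!verdict.ok) return null; // clean short-circuit, no state leaked
  }
  // Steps 5–7: assemble system prompt, call the LLM, loop over tool calls (omitted)
  return `reply for ${msg.channelId}`; // step 8: routed back to the source channel
}
```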

Unified Conversation

ScalyClaw is a single-user assistant. There is one shared conversation history across all channels — stored in SQLite, not isolated per channel. When you send a message from Telegram, the full conversation history (including messages from Discord, Slack, and the web gateway) is loaded into context. Responses are routed back to the source channel only. The channelId is recorded as a source marker in each message, not as an isolation boundary.
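
The single-conversation model can be illustrated with a small sketch. The types and function names here are hypothetical (the real history lives in SQLite); the point is that loading context ignores the channel entirely, while reply routing uses only the inbound message's channel:

```typescript
// Sketch: one shared history, channelId as a source marker only.
interface StoredMessage {
  channelId: string; // where the message came from — not an isolation boundary
  role: "user" | "assistant";
  text: string;
}

const history: StoredMessage[] = [
  { channelId: "discord", role: "user", text: "remind me later" },
  { channelId: "slack", role: "assistant", text: "will do" },
];

// Loading context includes everything, regardless of channel.
function loadContext(): StoredMessage[] {
  return history;
}

// Routing a reply uses the *inbound* message's channel, nothing else.
function replyTarget(inbound: StoredMessage): string {
  return inbound.channelId;
}
```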

Cancellation

Cancellation uses a simple Redis flag. When the user sends /stop, the Node sets scalyclaw:cancel in Redis with a 30-second TTL. The orchestrator checks this flag between LLM rounds — if set, it aborts the current processing, clears the flag, and returns immediately.
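
The flag semantics can be sketched with an in-memory stand-in for Redis (the key name and 30-second TTL come from the docs; the TTL bookkeeping here only mimics Redis `SET ... EX` and `DEL`):

```typescript
// In-memory stand-in for the Redis cancel flag (real code would use SET/GET/DEL).
const CANCEL_KEY = "scalyclaw:cancel";
const store = new Map<string, { value: string; expiresAt: number }>();

function setCancel(now: number): void {
  // Equivalent of: SET scalyclaw:cancel 1 EX 30
  store.set(CANCEL_KEY, { value: "1", expiresAt: now + 30_000 });
}

// Checked between LLM rounds: returns true once, then clears the flag.
function checkAndClearCancel(now: number): boolean {
  const entry = store.get(CANCEL_KEY);
  if (!entry || entry.expiresAt <= now) return false; // absent or expired
  store.delete(CANCEL_KEY);
  return true;
}
```

The TTL means a `/stop` that arrives while nothing is running expires on its own instead of cancelling a future, unrelated request.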

System Prompt Assembly

The orchestrator builds the system prompt fresh for every LLM call. It combines three sources:

  • Disk files — mind/IDENTITY.md, mind/SOUL.md, mind/USER.md (user-editable personality)
  • Code sections — three code-defined sections in scalyclaw/src/prompt/: core-instructions, knowledge, and extensions
  • Dynamic data — current time, active channel, recent memories retrieved via semantic search, resolved secrets from vault, skill manifests, agent definitions
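
The assembly can be sketched as simple concatenation of the three sources. The file and section names come from the docs; the loader functions themselves are placeholders:

```typescript
// Sketch of per-call prompt assembly; each loader is a placeholder.
function loadDiskFiles(): string {
  // Real code reads mind/IDENTITY.md, mind/SOUL.md, mind/USER.md
  return "[identity][soul][user]";
}

function loadCodeSections(): string {
  // core-instructions, knowledge, extensions from scalyclaw/src/prompt/
  return "[core-instructions][knowledge][extensions]";
}

function loadDynamicData(now: Date, channel: string): string {
  // Time, active channel, semantic-search memories, secrets, manifests…
  return `[time=${now.toISOString()}][channel=${channel}]`;
}

// Rebuilt from scratch for every LLM call — nothing is cached between calls.
function buildSystemPrompt(now: Date, channel: string): string {
  return [loadDiskFiles(), loadCodeSections(), loadDynamicData(now, channel)].join("\n");
}
```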

Tool Execution Routing

When the LLM emits a tool call, the unified tool router in tool-impl.ts decides where to execute it. Local tools run inline in the Node process. Heavy or sandboxed tools are dispatched to BullMQ queues and the Node awaits the result:

| Tool | Execution target | Queue |
| --- | --- | --- |
| execute_code | Worker sandbox | scalyclaw-tools |
| execute_skill | Worker skill runner | scalyclaw-tools |
| execute_command | Worker shell executor | scalyclaw-tools |
| delegate_agent | Node agent executor | scalyclaw-agents |
| Everything else | Node — inline (direct execution) | — (no queue) |
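
The routing table maps to a small dispatch function. This is a sketch of the decision, not the actual tool-impl.ts code:

```typescript
// Sketch of the routing decision in tool-impl.ts: where does a tool call run?
type Route =
  | { target: "node-inline"; queue: null }
  | { target: "worker"; queue: "scalyclaw-tools" }
  | { target: "node-agent"; queue: "scalyclaw-agents" };

function routeTool(name: string): Route {
  switch (name) {
    case "execute_code":
    case "execute_skill":
    case "execute_command":
      // Heavy or sandboxed work is dispatched to the Worker via the tools queue
      return { target: "worker", queue: "scalyclaw-tools" };
    case "delegate_agent":
      // Agents run on the Node, but asynchronously via their own queue
      return { target: "node-agent", queue: "scalyclaw-agents" };
    default:
      // Everything else executes inline in the Node process
      return { target: "node-inline", queue: null };
  }
}
```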

Queue System

ScalyClaw uses four BullMQ queues. The separate Worker process consumes only the scalyclaw-tools queue. The Node process runs its own BullMQ workers for the remaining three queues. Jobs are persistent — Redis stores them until acknowledged — so a restart never loses work in progress.

BullMQ queue naming

BullMQ forbids colons (:) in queue names because it uses colons internally as key separators in Redis. All ScalyClaw queue names use hyphens — e.g. scalyclaw-messages, not scalyclaw:messages.
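
A small guard for this constraint can look like the following — a hypothetical helper, not part of ScalyClaw or BullMQ:

```typescript
// Hypothetical helper: reject queue names BullMQ cannot accept.
// BullMQ uses ':' internally as a Redis key separator, so names must not contain it.
function assertValidQueueName(name: string): string {
  if (name.includes(":")) {
    throw new Error(`Invalid BullMQ queue name "${name}": colons are reserved`);
  }
  return name;
}
```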

| Queue name | Consumed by | Concurrency | Job types |
| --- | --- | --- | --- |
| scalyclaw-messages | Node (message-processor.ts) | 5 | message-processing, command |
| scalyclaw-agents | Node (agent-processor.ts) | 3 | agent-task |
| scalyclaw-tools | Worker (tool-processor.ts) | | tool-execution, skill-execution |
| scalyclaw-internal | Node (internal-processor.ts) | 3 | proactive-check, reminder, recurrent-reminder, task, recurrent-task, memory-extraction, vault-key-rotation |

Worker Scaling

Because the Worker is stateless and only consumes the scalyclaw-tools queue, you can run as many Worker processes as you need to scale tool throughput. Each Worker connects to the same Redis and competes with peers for jobs. BullMQ handles distributed locking internally — a job is processed by exactly one worker even when many are running. The three Node-internal queues (scalyclaw-messages, scalyclaw-agents, scalyclaw-internal) are always consumed by the Node process itself.

bash
# Run two workers for higher tool throughput
scalyclaw worker start &
scalyclaw worker start &

Configuration

ScalyClaw stores all runtime configuration in Redis at the key scalyclaw:config as a JSON object. There are no config files on disk — the install is self-contained and portable. When you change a setting in the dashboard, it writes directly to Redis; all processes pick it up without a restart.
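
A dashboard setting change is therefore a read-modify-write of a single JSON blob. This sketch shows the shape of that operation with plain JSON handling — `readConfig`/`writeConfig` would be `GET`/`SET` on scalyclaw:config with whatever Redis client is in use:

```typescript
// Sketch of how a dashboard setting change maps onto the single JSON blob.
function updateConfig(rawJson: string, patch: (cfg: any) => void): string {
  const cfg = JSON.parse(rawJson); // GET scalyclaw:config
  patch(cfg);                      // apply the dashboard edit
  return JSON.stringify(cfg);      // SET scalyclaw:config — processes re-read it live
}
```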

Config Structure

json
{
  "orchestrator": {
    "id": "default",
    "maxIterations": 50,
    "maxInputTokens": 150000,
    "models": [{ "model": "claude-sonnet-4-20250514", "weight": 100, "priority": 1 }],
    "skills": [],
    "agents": []
  },
  "gateway": {
    "host": "127.0.0.1",
    "port": 3000,
    "bind": "127.0.0.1",
    "authType": "none",
    "authValue": null,
    "tls": { "cert": "", "key": "" },
    "cors": []
  },
  "logs": { "level": ["all"], "format": "json", "type": "console" },
  "memory": {
    "topK": 10,
    "scoreThreshold": 0.5,
    "embeddingModel": "auto"
  },
  "queue": {
    "lockDuration": 18300000,
    "stalledInterval": 30000,
    "limiter": { "max": 10, "duration": 1000 },
    "removeOnComplete": { "age": 86400, "count": 1000 },
    "removeOnFail": { "age": 604800 }
  },
  "models": {
    "providers": {
      "anthropic": { "apiKey": "sk-ant-..." },
      "openai": { "apiKey": "sk-..." }
    },
    "models": [{
      "id": "claude-sonnet",
      "name": "claude-sonnet-4-20250514",
      "provider": "anthropic",
      "enabled": true,
      "priority": 1,
      "weight": 100,
      "temperature": 0.7,
      "maxTokens": 8192,
      "contextWindow": 200000,
      "toolEnabled": true,
      "imageEnabled": true,
      "audioEnabled": false,
      "videoEnabled": false,
      "documentEnabled": true,
      "reasoningEnabled": false,
      "inputPricePerMillion": 3,
      "outputPricePerMillion": 15
    }],
    "embeddingModels": [{
      "id": "text-embedding-3-small",
      "name": "text-embedding-3-small",
      "provider": "openai",
      "enabled": true,
      "priority": 1,
      "weight": 100,
      "dimensions": 1536,
      "inputPricePerMillion": 0.02,
      "outputPricePerMillion": 0
    }]
  },
  "guards": {
    "message": {
      "enabled": true,
      "model": "",
      "echoGuard": { "enabled": false, "similarityThreshold": 0.7 },
      "contentGuard": { "enabled": true }
    },
    "skill": { "enabled": true, "model": "" },
    "agent": { "enabled": true, "model": "" },
    "commandShield": { "enabled": true, "denied": ["rm -rf /", "mkfs.", "shutdown", "..."], "allowed": [] }
  },
  "budget": {
    "monthlyLimit": 0,
    "dailyLimit": 0,
    "hardLimit": false,
    "alertThresholds": [50, 80, 90]
  },
  "proactive": {
    "enabled": true,
    "model": "",
    "cronPattern": "*/15 * * * *",
    "idleThresholdMinutes": 120,
    "cooldownSeconds": 14400,
    "maxPerDay": 3,
    "quietHours": {
      "enabled": true,
      "start": 22,
      "end": 8,
      "timezone": "UTC"
    }
  },
  "channels": {},
  "skills": [],
  "mcpServers": {}
}

Hot Reload via Pub/Sub

When skills or agents change — whether edited in the dashboard or auto-created by the LLM — Redis pub/sub signals all running processes to reload their in-memory manifests without a restart. The Node subscribes to these channels at startup and refreshes its internal registry immediately on receipt.

| Pub/sub channel | Triggered by | Effect |
| --- | --- | --- |
| scalyclaw:skills:reload | Dashboard skill editor, LLM skill creation tool | Node reloads all skill manifests from Redis; worker flushes its skill module cache |
| scalyclaw:agents:reload | Dashboard agent editor, LLM agent creation tool | Node reloads all agent definitions; next delegate_agent call uses updated config |
| scalyclaw:config:reload | Dashboard config editor | All processes re-read scalyclaw:config from Redis and apply the new settings without a restart |

typescript
// Node subscribes at startup
const subscriber = redis.duplicate();
await subscriber.subscribe(
  "scalyclaw:skills:reload",
  "scalyclaw:agents:reload",
  "scalyclaw:config:reload",
);

subscriber.on("message", async (channel) => {
  if (channel === "scalyclaw:skills:reload") {
    await reloadSkills();
  } else if (channel === "scalyclaw:agents:reload") {
    await reloadAgents();
  } else if (channel === "scalyclaw:config:reload") {
    await reloadConfig();
  }
});

Secrets

API keys and tokens are stored in the config object directly (e.g., providers.anthropic.apiKey). For additional secrets (channel tokens, MCP headers, etc.), use the vault at scalyclaw:secret:{name} in Redis. Secrets are managed via the dashboard Vault page and never written to disk.
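
Secret lookup can be sketched as: check the config object first, then fall back to the vault keyspace. The key pattern comes from the docs; the resolver itself is a hypothetical illustration:

```typescript
// Hypothetical resolver: config-embedded keys first, then the vault keyspace.
const VAULT_PREFIX = "scalyclaw:secret:";

function resolveSecret(
  name: string,
  config: Record<string, any>,
  vault: Map<string, string>, // stand-in for Redis GET scalyclaw:secret:{name}
): string | undefined {
  // e.g. "providers.anthropic.apiKey" stored directly in the config object
  const fromConfig = name
    .split(".")
    .reduce<any>((obj, key) => (obj == null ? undefined : obj[key]), config);
  if (typeof fromConfig === "string") return fromConfig;
  return vault.get(VAULT_PREFIX + name);
}
```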

Tip

You can inspect the live config at any time with redis-cli GET scalyclaw:config | jq. Changes written by the dashboard are immediately visible there.