# Skills
Skills are self-contained code packages that extend ScalyClaw's capabilities. Think of them as plugins the AI can invoke — each skill encapsulates a discrete capability, receives parameters via stdin JSON, and runs in an isolated worker sandbox. The AI calls skills through the `execute_skill` tool, waits for the result, and incorporates it into its response. You can write skills in JavaScript, Python, Rust, or Bash; package them in a folder; and drop them into your installation without touching any ScalyClaw source code.
## Skill Basics
Every skill is a folder containing exactly two required components: a SKILL.md manifest that describes the skill to the AI, and an entry point file that contains the executable code. The manifest is the contract — it defines the skill's name, description, parameters, and language. The AI reads the manifest to understand what the skill does and how to call it; the worker reads it to know how to execute the entry point.
## SKILL.md Manifest Format
The manifest is a markdown file with a structured YAML front-matter block followed by a freeform description section. The front-matter is machine-readable; the description section gives the AI richer context about when and how to use the skill.
```markdown
---
name: weather-lookup
description: Fetches current weather conditions and forecast for a location.
script: index.js
language: javascript
install: bun install
---

## When to use

Use this skill whenever the user asks about current weather, temperature, rain, wind, or a forecast for a specific location. Do not use it for historical weather data — it only covers the current 7-day window.

## Input

Receives a JSON object via stdin with fields: `location` (string, required — city name or "lat,lon" coordinates) and `units` (string, optional — "metric" or "imperial", defaults to "metric").

## Output

Returns a JSON object with `current` (temperature, condition, wind) and `forecast` (array of daily summaries for the next 7 days).
```
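The front-matter is the machine-readable half of the contract. As a rough sketch of what a loader sees when it reads a manifest like the one above, here is a minimal front-matter reader. The `parse_manifest` helper is illustrative, not part of ScalyClaw, and handles only flat `key: value` lines rather than full YAML:

```python
def parse_manifest(skill_md: str) -> dict:
    """Extract the YAML front-matter of a SKILL.md as a flat dict.

    Illustrative only: handles simple `key: value` lines; a real
    loader would use a full YAML parser.
    """
    parts = skill_md.split("---")
    if len(parts) < 3:
        raise ValueError("SKILL.md is missing a front-matter block")
    fields = {}
    for line in parts[1].strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

manifest = parse_manifest("""---
name: weather-lookup
description: Fetches current weather for a location.
script: index.js
language: javascript
---
## When to use
...
""")

# The worker needs at least these fields to execute the skill
assert manifest["name"] == "weather-lookup"
assert manifest["language"] == "javascript"
```

The freeform section after the second `---` is left to the AI; only the front-matter needs to be machine-parsed.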
### Manifest Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Unique identifier for the skill. Use kebab-case. The AI references this name when calling `execute_skill`. Must be unique across all installed skills. |
| `description` | string | Yes | One-sentence description of what the skill does. This is injected into the system prompt so the AI knows the skill exists and what it is for. Keep it tight — one clear sentence is better than a paragraph. |
| `script` | string | Yes | Path to the entry point file, relative to the skill folder. Conventionally `index.js`, `main.py`, `main.rs`, or `run.sh`. |
| `language` | string | Yes | Runtime to use. One of `javascript`, `python`, `rust`, or `bash`. |
| `install` | string | No | Install command to run before the first execution. Auto-detected from `package.json` (`bun install`), `pyproject.toml` (`uv sync`), `requirements.txt` (`uv pip install -r requirements.txt`), or `Cargo.toml` (`cargo build --release`). Use `install: none` to skip. |
## Supported Languages
| Language | `language` value | Runtime | Dependency file |
|---|---|---|---|
| JavaScript | `javascript` | `bun run` | `package.json` → `bun install` |
| Python | `python` | `uv run` | `pyproject.toml` → `uv sync` / `requirements.txt` → `uv pip install -r requirements.txt` |
| Rust | `rust` | Cargo (compiled before first run) | `Cargo.toml` → `cargo build --release` |
| Bash | `bash` | `bash` | — (none) |
When ScalyClaw invokes your skill, it writes the full parameter object as a single JSON object to the process's stdin. Read and parse stdin at the start of your script to access all parameters. Secrets are injected separately as environment variables with the prefix `SKILL_SECRET_` (e.g. a secret named `api_key` becomes `SKILL_SECRET_API_KEY`). The workspace path is available as `WORKSPACE_DIR`.
## Creating a Skill
Creating a skill is straightforward: make a folder, write a `SKILL.md`, write your entry point, and drop the folder into `~/.scalyclaw/skills/`. ScalyClaw detects the new folder via the hot-reload mechanism and makes the skill available immediately.
### JavaScript Skill — Weather Lookup
This skill fetches current weather data from a public API and returns a structured JSON result. It demonstrates reading parameters from stdin JSON, secret injection via environment variables, and structured stdout output.
Folder structure:

```
weather-lookup/
├── SKILL.md
├── index.js
└── package.json
```
**SKILL.md**

```markdown
---
name: weather-lookup
description: Fetches current weather conditions and a 7-day forecast for any location.
script: index.js
language: javascript
install: bun install
---

## When to use

Use for any question about current or upcoming weather at a specific place. Not suitable for historical weather data.

## Input

Receives a JSON object via stdin:

- `location` (string, required): City name (e.g. "London") or "lat,lon" coordinates.
- `units` (string, optional): "metric" (Celsius/km·h) or "imperial" (Fahrenheit/mph). Defaults to "metric".
- Requires vault secret `openweather_api_key` (injected as SKILL_SECRET_OPENWEATHER_API_KEY).
```
**index.js**

```javascript
// Parameters delivered as JSON via stdin
const params = JSON.parse(await new Response(process.stdin).text());
const location = params.location;
const units = params.units ?? "metric";

// Secrets injected from vault as SKILL_SECRET_{NAME_UPPER}
const apiKey = process.env.SKILL_SECRET_OPENWEATHER_API_KEY;

if (!location) {
  console.error("Missing required parameter: location");
  process.exit(1);
}

async function run() {
  // Geocode if location is not already coordinates
  const coords = await resolveCoords(location, apiKey);

  const url =
    `https://api.openweathermap.org/data/3.0/onecall?` +
    `lat=${coords.lat}&lon=${coords.lon}` +
    `&units=${units}&exclude=minutely,hourly,alerts` +
    `&appid=${apiKey}`;

  const res = await fetch(url);
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  const data = await res.json();

  const result = {
    location: coords.name,
    current: {
      temp: data.current.temp,
      feels_like: data.current.feels_like,
      humidity: data.current.humidity,
      wind_speed: data.current.wind_speed,
      condition: data.current.weather[0].description,
    },
    forecast: data.daily.slice(0, 7).map(d => ({
      date: new Date(d.dt * 1000).toISOString().slice(0, 10),
      high: d.temp.max,
      low: d.temp.min,
      condition: d.weather[0].description,
      rain_mm: d.rain ?? 0,
    })),
  };

  // Write JSON to stdout — ScalyClaw captures this as the skill result
  console.log(JSON.stringify(result, null, 2));
}

async function resolveCoords(loc, key) {
  if (/^-?\d+\.?\d*,-?\d+\.?\d*$/.test(loc)) {
    const [lat, lon] = loc.split(",").map(Number);
    return { lat, lon, name: loc };
  }
  const geoUrl = `https://api.openweathermap.org/geo/1.0/direct?q=${encodeURIComponent(loc)}&limit=1&appid=${key}`;
  const geo = await (await fetch(geoUrl)).json();
  if (!geo.length) throw new Error(`Location not found: ${loc}`);
  return { lat: geo[0].lat, lon: geo[0].lon, name: geo[0].name };
}

run().catch(err => {
  console.error(err.message);
  process.exit(1);
});
```
**package.json** (optional — auto-installed if present)

```json
{
  "name": "weather-lookup",
  "type": "module"
}
```
This skill uses only Bun's built-in `fetch`, so no third-party packages are needed. If your skill does need npm packages, list them in `package.json` and ScalyClaw will run `bun install` automatically before the first execution.
### Python Skill — Text Analysis
This skill analyses a block of text and returns readability metrics, word frequency, and a sentiment estimate. It demonstrates reading parameters from stdin JSON and using a requirements.txt for dependencies.
Folder structure:

```
text-analysis/
├── SKILL.md
├── main.py
└── requirements.txt
```
**SKILL.md**

```markdown
---
name: text-analysis
description: Analyses text and returns readability scores, word frequency, and sentiment.
script: main.py
language: python
install: uv pip install -r requirements.txt
---

## When to use

Use when the user wants to understand the structure, reading level, or sentiment of a document, email, article, or any chunk of text.

## Input

Receives a JSON object via stdin:

- `text` (string, required): The text to analyse. UTF-8, up to ~50,000 characters.
- `top_words` (number, optional): How many top words to include. Defaults to 10.

## Output

JSON with `readability` (Flesch–Kincaid grade and ease scores), `sentiment` (positive/negative/neutral + score), and `top_words` (list of word + count pairs).
```
**main.py**

```python
import sys
import json
import re
from collections import Counter

import textstat
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# ScalyClaw writes the full parameter object to stdin as JSON
params = json.load(sys.stdin)
text = params.get("text")
top_words = int(params.get("top_words", 10))

if not text:
    print(json.dumps({"error": "Missing required parameter: text"}))
    sys.exit(1)

# Readability
fk_grade = textstat.flesch_kincaid_grade(text)
fk_ease = textstat.flesch_reading_ease(text)

# Sentiment (VADER works well on short to medium texts)
analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores(text)
compound = scores["compound"]
sentiment = (
    "positive" if compound >= 0.05
    else "negative" if compound <= -0.05
    else "neutral"
)

# Word frequency (lowercased, stripped of punctuation)
words = re.findall(r"\b[a-zA-Z]{3,}\b", text.lower())
stopwords = {"the", "and", "for", "that", "this", "with", "are", "was"}
words = [w for w in words if w not in stopwords]
counter = Counter(words)

result = {
    "readability": {
        "flesch_kincaid_grade": fk_grade,
        "flesch_reading_ease": fk_ease,
        "grade_label": f"Grade {round(fk_grade)}",
    },
    "sentiment": {
        "label": sentiment,
        "score": compound,
        "detail": scores,
    },
    "top_words": [
        {"word": w, "count": c} for w, c in counter.most_common(top_words)
    ],
    "word_count": len(words),
}

# Write JSON to stdout — ScalyClaw captures this as the skill result
print(json.dumps(result, indent=2))
```
**requirements.txt**

```
textstat==0.7.3
vaderSentiment==3.3.2
```
### Bash Skill — System Info
Bash skills are ideal for lightweight shell operations that need no dependencies. Parameters arrive as a JSON object on stdin; parse them with a tool like `jq`. Stdout is the result.
**SKILL.md**

```markdown
---
name: system-info
description: Reports CPU load, memory usage, and disk space on the host machine.
script: run.sh
language: bash
---

## Input

Receives a JSON object via stdin:

- `disk_path` (string, optional): Filesystem path to check disk usage for. Defaults to "/".
```
**run.sh**

```bash
#!/usr/bin/env bash
set -euo pipefail

# Parameters arrive as JSON on stdin — parse with jq
INPUT=$(cat)
DISK_PATH=$(echo "$INPUT" | jq -r '.disk_path // "/"')

# CPU load averages (1m, 5m, 15m)
LOAD=$(uptime | awk -F'load average:' '{print $2}' | xargs)

# Memory (Linux /proc values in kB, with macOS fallbacks)
MEM_TOTAL=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null || sysctl -n hw.memsize)
MEM_FREE=$(awk '/MemAvailable/ {print $2}' /proc/meminfo 2>/dev/null || vm_stat | awk '/Pages free/ {print $3}')

# Disk usage for requested path
DISK=$(df -h "$DISK_PATH" | awk 'NR==2 {printf "%s used of %s (%s)", $3, $2, $5}')

# Emit JSON to stdout
printf '{"load_average": "%s", "memory_total": "%s", "memory_free": "%s", "disk": "%s"}\n' \
  "$LOAD" "$MEM_TOTAL" "$MEM_FREE" "$DISK"
```
## Deployment
There are three ways to deploy a skill. All three result in the skill being immediately available after reload — you do not need to restart any process.
### Method 1: Drop a Folder
The simplest method. Place the skill folder directly inside `~/.scalyclaw/skills/`. ScalyClaw watches this directory and picks up new folders automatically via the hot-reload mechanism.
```bash
# Copy a skill folder into the skills directory
cp -r ~/my-skills/weather-lookup ~/.scalyclaw/skills/

# ScalyClaw detects the new folder and publishes the reload signal automatically.
# You can also trigger reload manually:
redis-cli PUBLISH scalyclaw:skills:reload ""
```
### Method 2: Upload a Zip via Dashboard
In the dashboard, navigate to **Skills** and click **Upload Skill**. Select a `.zip` file containing your skill folder. ScalyClaw extracts it into `~/.scalyclaw/skills/`, validates the manifest, installs dependencies, and triggers a hot-reload — all in one step. This is the recommended method for remote deployments where you do not have shell access to the host.
The zip file must contain the skill folder as its top-level directory — not a flat listing of files. Correct: `weather-lookup/SKILL.md`. Incorrect: `SKILL.md` at the zip root. Most operating systems produce the correct structure when you right-click a folder and choose "Compress".
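If you build the zip in a script rather than through the OS context menu, a structural check like the following catches the flat-listing mistake before upload. This is a sketch using Python's standard `zipfile` module; `has_single_top_level_folder` is a hypothetical helper, not a ScalyClaw API:

```python
import io
import zipfile

def has_single_top_level_folder(zip_bytes: bytes) -> bool:
    """True if every entry in the zip lives under one top-level directory."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        top_levels = {name.split("/")[0] for name in zf.namelist() if name.strip("/")}
    return len(top_levels) == 1

# Correct layout: weather-lookup/SKILL.md
good = io.BytesIO()
with zipfile.ZipFile(good, "w") as zf:
    zf.writestr("weather-lookup/SKILL.md", "---\nname: weather-lookup\n---")
    zf.writestr("weather-lookup/index.js", "// ...")
assert has_single_top_level_folder(good.getvalue())

# Incorrect layout: SKILL.md at the zip root
flat = io.BytesIO()
with zipfile.ZipFile(flat, "w") as zf:
    zf.writestr("SKILL.md", "---\nname: x\n---")
    zf.writestr("index.js", "// ...")
assert not has_single_top_level_folder(flat.getvalue())
```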
### Method 3: AI Self-Creation via `execute_code`
ScalyClaw can write and deploy skills entirely on its own. When asked to do something it cannot currently do, it uses the `execute_code` tool to write the skill files, saves them into `~/.scalyclaw/skills/`, and triggers the reload signal — making the skill available without any human involvement. See the Advanced section for details on how this works.
## Hot Reload
When a skill is added, modified, or removed, ScalyClaw does not need to restart. The process subscribes to the `scalyclaw:skills:reload` Redis pub/sub channel at startup. Any process that writes a new or updated skill publishes a message to this channel, and all running instances reload their in-memory skill manifests immediately.
```javascript
// How ScalyClaw reloads skills on pub/sub signal (simplified)
subscriber.on("message", async (channel) => {
  if (channel === "scalyclaw:skills:reload") {
    // Re-read all SKILL.md files from disk
    const skills = await loadAllSkillManifests("~/.scalyclaw/skills");

    // Update the in-memory registry used by the skill section of the system prompt
    skillRegistry.set(skills);

    // Next LLM call will include the updated skill list in the system prompt
    console.log(`[skills] Reloaded ${skills.length} skills`);
  }
});
```
## Automatic Dependency Installation
If a dependency file is found alongside the skill's entry point, ScalyClaw installs dependencies automatically before executing the skill for the first time. This happens in the worker sandbox and does not require any manual steps.
| Language | Dependency file | Install command |
|---|---|---|
| JavaScript | `package.json` | `bun install` |
| Python | `pyproject.toml` | `uv sync` |
| Python | `requirements.txt` | `uv pip install -r requirements.txt` |
| Rust | `Cargo.toml` | `cargo build --release` |
Dependencies are installed into the skill folder itself, not globally, so skills are fully isolated from each other and from the host system. Subsequent runs skip the install step unless the dependency file changes.
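A common way to implement the skip-unless-changed behaviour is to hash the dependency file and compare against the hash recorded at the last install. The sketch below illustrates the idea; the cache-file convention is hypothetical, not ScalyClaw's actual mechanism:

```python
import hashlib
import tempfile
from pathlib import Path

def needs_install(dep_file: Path, cache_file: Path) -> bool:
    """True if the dependency file changed since the last recorded install."""
    current = hashlib.sha256(dep_file.read_bytes()).hexdigest()
    if cache_file.exists() and cache_file.read_text() == current:
        return False  # unchanged: skip the install step
    cache_file.write_text(current)  # record the hash for the next run
    return True

tmp = Path(tempfile.mkdtemp())
dep = tmp / "requirements.txt"
cache = tmp / ".dep-hash"

dep.write_text("textstat==0.7.3\n")
assert needs_install(dep, cache)        # first run: install
assert not needs_install(dep, cache)    # unchanged: skip
dep.write_text("textstat==0.7.4\n")
assert needs_install(dep, cache)        # changed: reinstall
```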
## Advanced
### Input / Output Convention
All skills share the same I/O contract regardless of language. The AI assembles the parameters it wants to pass and ScalyClaw serialises them to JSON, which is written to the skill process's stdin. The skill reads and parses stdin at startup, does its work, and writes a single JSON object to stdout as its result. Anything written to stderr is captured as diagnostic output and is not returned to the AI as a result.
| Channel | Direction | Content |
|---|---|---|
| stdin | ScalyClaw → skill | JSON object containing all parameters passed by the AI |
| stdout | skill → ScalyClaw | JSON object (the skill result returned to the AI) |
| stderr | skill → ScalyClaw | Diagnostic / error text; logged but not returned to the AI |
| env vars | ScalyClaw → skill | SKILL_SECRET_* (vault secrets) and WORKSPACE_DIR |
Example: reading stdin in JavaScript, Python, and Bash.
```javascript
// JavaScript — read stdin, parse JSON, write result to stdout
const params = JSON.parse(await new Response(process.stdin).text());
const { query, max_results = 5 } = params;
// ... do work ...
console.log(JSON.stringify({ results }));
```
```python
# Python — read stdin, parse JSON, write result to stdout
import sys, json
params = json.load(sys.stdin)
query = params["query"]
max_results = params.get("max_results", 5)
# ... do work ...
print(json.dumps({"results": results}))
```
```bash
# Bash — read stdin, parse with jq
INPUT=$(cat)
QUERY=$(echo "$INPUT" | jq -r '.query')
MAX_RESULTS=$(echo "$INPUT" | jq -r '.max_results // 5')
# ... do work ...
printf '{"results": []}\n'
```
### Long-Running Skills with Progress Output
Skills that take more than a few seconds can emit incremental progress updates by writing lines prefixed with `PROGRESS:` to stdout before the final JSON result. ScalyClaw forwards these lines to the AI as intermediate tool output, which can relay progress status to the user while the skill continues running.
```javascript
async function run() {
  // Emit progress lines — ScalyClaw forwards these to the AI in real time
  console.log("PROGRESS: Fetching data from source...");
  const rawData = await fetchData();

  console.log("PROGRESS: Processing 1,200 records...");
  const processed = await processRecords(rawData);

  console.log("PROGRESS: Generating report...");
  const report = buildReport(processed);

  // Final result — the last JSON object on stdout is the skill's return value
  console.log(JSON.stringify(report));
}
```
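On the receiving side, separating progress lines from the final result is a matter of filtering stdout by the `PROGRESS:` prefix; the last remaining line is the JSON result. A sketch in Python, where `split_skill_output` is an illustrative helper that assumes the final result is emitted as a single JSON line:

```python
import json

def split_skill_output(stdout: str):
    """Separate PROGRESS: lines from the final JSON result on stdout."""
    progress, result = [], None
    for line in stdout.splitlines():
        if line.startswith("PROGRESS:"):
            progress.append(line[len("PROGRESS:"):].strip())
        elif line.strip():
            result = json.loads(line)  # the last non-progress line wins
    return progress, result

out = 'PROGRESS: Fetching data...\nPROGRESS: Generating report...\n{"rows": 1200}\n'
progress, result = split_skill_output(out)
assert progress == ["Fetching data...", "Generating report..."]
assert result == {"rows": 1200}
```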
### Self-Created Skills
One of ScalyClaw's most powerful features is that the AI can write and deploy skills on its own. When asked to perform a task that requires a persistent, reusable capability, the AI uses the `execute_code` tool to write the skill files directly into `~/.scalyclaw/skills/` and then publishes the reload signal to make them available immediately.
For example, if you ask ScalyClaw to "always check my stock portfolio when I ask about markets", it might:
- Write a `SKILL.md` manifest describing a `portfolio-check` skill.
- Write an `index.js` that calls your brokerage API using a secret it asks you to store in the vault.
- Save both files to `~/.scalyclaw/skills/portfolio-check/`.
- Publish to `scalyclaw:skills:reload` so the skill is immediately available.
- Invoke the new skill via `execute_skill` to answer your current question in the same conversation turn.
This means skills can grow organically from conversations without you ever opening a code editor. The AI writes the code, tests it by invoking it, and fixes any errors — all within the same message thread.
All skill code — whether written by you or by the AI — passes through the Skill Guard before it is executed. The guard is a separate LLM call that inspects the code for dangerous operations: unrestricted filesystem access, network calls to unexpected hosts, attempts to read secrets outside the declared secrets list, or shell injection patterns. If the guard rejects the code, the skill is not executed and you are informed of the reason.
### Security Model
Skills run in an isolated sandbox inside the Worker process. The sandbox applies the following constraints by default:
- Filesystem — read/write access is limited to the skill's own folder and a temporary scratch directory (`/tmp/scalyclaw-skill-{id}`). Attempts to access files outside these paths are blocked at the OS level.
- Network — outbound HTTP/HTTPS is allowed. Raw TCP and UDP outside of standard ports require explicit declaration in the manifest (`allowedPorts`). There is no inbound network access.
- Secrets — vault secrets are injected as `SKILL_SECRET_*` environment variables and are the only way for a skill to receive sensitive credentials. The skill cannot access other vault secrets, the Redis instance, or any ScalyClaw internal state.
- Timeout — enforced at the process level. A skill that hangs is killed after the configured timeout. The AI receives a timeout error and can retry or report the failure to the user.
- Resource limits — CPU and memory limits are applied per-skill via system controls to prevent a runaway skill from starving the worker of resources.
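Several of these constraints map onto ordinary process controls. The timeout, for instance, can be enforced with nothing more than a hard deadline on the child process; the sketch below illustrates the idea with Python's `subprocess` module (illustrative only, not ScalyClaw's actual worker code):

```python
import subprocess

def run_skill(cmd, params_json: str, timeout_s: float):
    """Run a skill process with stdin JSON and a hard timeout."""
    try:
        proc = subprocess.run(
            cmd,
            input=params_json,
            capture_output=True,
            text=True,
            timeout=timeout_s,  # the child is killed if it exceeds this
        )
        return {"ok": True, "stdout": proc.stdout, "stderr": proc.stderr}
    except subprocess.TimeoutExpired:
        return {"ok": False, "error": "skill timed out"}

# A skill that hangs longer than its timeout is killed and reported
assert run_skill(["sleep", "5"], "{}", timeout_s=0.2) == {
    "ok": False,
    "error": "skill timed out",
}

# A fast skill returns its stdout normally (`cat` echoes stdin back)
result = run_skill(["cat"], '{"location": "London"}', timeout_s=5)
assert result["ok"] and result["stdout"] == '{"location": "London"}'
```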
Store API keys and tokens in the vault — never hardcode them in skill files. ScalyClaw reads secrets from `scalyclaw:secret:{name}` in Redis and injects them as environment variables before execution. The secret named `my_api_key` becomes `SKILL_SECRET_MY_API_KEY` in the skill's environment. This way secrets are resolved at runtime, never written to disk, and the `SKILL.md` itself contains no sensitive values.
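The injection step itself amounts to resolving each declared secret and placing it in the child environment under the mapped name. A sketch of that mapping, where `build_skill_env` and the `get_secret` callable are illustrative stand-ins for the Redis lookup of `scalyclaw:secret:{name}`:

```python
def build_skill_env(declared_secrets, get_secret, workspace_dir):
    """Build the environment additions for a skill process.

    `get_secret` stands in for a runtime lookup of the Redis key
    scalyclaw:secret:{name}; nothing is written to disk.
    """
    env = {"WORKSPACE_DIR": workspace_dir}
    for name in declared_secrets:
        # Mapping rule: uppercase the secret name, prefix with SKILL_SECRET_
        env["SKILL_SECRET_" + name.upper()] = get_secret(name)
    return env

vault = {"github_token": "ghp_xxx", "my_api_key": "k-123"}
env = build_skill_env(["my_api_key"], vault.get, "/home/user/workspace")
assert env == {
    "WORKSPACE_DIR": "/home/user/workspace",
    "SKILL_SECRET_MY_API_KEY": "k-123",
}
```

Only declared secrets are resolved, which is what keeps a skill from reading the rest of the vault.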
Here is an example of a manifest that correctly declares its secret dependencies:
```markdown
---
name: github-pr-summary
description: Fetches and summarises open pull requests from a GitHub repository.
script: index.js
language: javascript
install: bun install
---

## Input

Receives a JSON object via stdin:

- `repo` (string, required): GitHub repository in "owner/repo" format.
- `state` (string, optional): Filter PRs by state — "open", "closed", or "all". Defaults to "open".
- Requires vault secret `github_token` (injected as SKILL_SECRET_GITHUB_TOKEN).
```
And the corresponding entry point reading the injected secret:
```javascript
// Parameters delivered as JSON via stdin
const params = JSON.parse(await new Response(process.stdin).text());
const repo = params.repo;
const state = params.state ?? "open";

// Secret injected from vault — never hardcoded
const token = process.env.SKILL_SECRET_GITHUB_TOKEN;

const res = await fetch(
  `https://api.github.com/repos/${repo}/pulls?state=${state}&per_page=20`,
  { headers: { Authorization: `Bearer ${token}`, Accept: "application/vnd.github+json" } }
);

if (!res.ok) {
  console.error(`GitHub API error: ${res.status} ${res.statusText}`);
  process.exit(1);
}

const prs = await res.json();
const summary = prs.map(pr => ({
  number: pr.number,
  title: pr.title,
  author: pr.user.login,
  created: pr.created_at.slice(0, 10),
  url: pr.html_url,
}));

console.log(JSON.stringify({ count: summary.length, pull_requests: summary }, null, 2));
```