مرکزی مواد پر جائیں

Install & Author Agent Skills

What You Will Learn

A skill is a portable cross-runtime spec. The same SKILL.md folder you install today runs unchanged across OpenClaw, Claude Code, OpenCode, Cursor, and 50+ other runtimes that adopted the AgentSkills specification. Before you install anything, the next section explains what a skill actually is: a folder, a frontmatter contract, a body that loads only when its description matches your message.

You will prove the cross-runtime claim by installing one skill in Claude Code first (in the terminal where you already work), watching it visibly change behaviour, then watching the same SKILL.md fire in your OpenClaw Employee through WhatsApp. From there you author your own with Anthropic's skill-creator, then meet Skill Workshop: the feature that captures procedural memory from how you correct your Employee and turns those corrections into durable SKILL.md files. Skills you install today will work in tools you have not learned yet.

If you came to this lesson directly, all you need to follow along is a working OpenClaw Employee from Lesson 2. The brain customization from Lesson 4 (SOUL.md, IDENTITY.md, USER.md) and the memory system from Lesson 5 are referenced for continuity but never required to install or use a skill.

What is an Agent Skill?

The official definition from agentskills.io: "Agent Skills are a lightweight, open format for extending AI agent capabilities with specialized knowledge and workflows." The format was originally developed by Anthropic and released as an open standard; dozens of agent products have since adopted it.

An analogy. Imagine your Employee has a shelf of laminated playbooks. Each playbook has a title on the spine: "handle a customer refund", "draft a meeting summary", "convert USD to PKR". Inside each one is the procedure for that situation. Your Employee never reads every playbook in every conversation; that would be exhausting and pointless. It scans the spines, decides which one fits the moment, opens that one, and follows the procedure. When the situation passes, the playbook goes back on the shelf.

That is what an Agent Skill is. A playbook with a title (so your Employee knows when to grab it), a procedure (so it knows what to do), and optional tools bundled in the same binder for the steps that need a calculator or a script. You install playbooks once; your Employee reads spines all day and only opens the one that matches the question in front of it.

A shelf of laminated playbooksYour Employee scans every spine, every turn. Bodies stay closed until the situation calls.Shelf (your installed skills)Meeting SummaryUSD to PKRRefund ReplyDaily StandupCode ReviewSpines (titles + descriptions) load every turn"Summarize my standup"your message arrivesmatchDaily Standup playbook (open)title (description)procedure (body):1. yesterday's wins2. today's plan3. blockers + owners4. format as bulletsscripts/references/Body loads only when the spine matchesA skill is a playbook. Your Employee reads spines all day; bodies open only on match.

Cross-runtime is the second half of the story. Every agent runtime that adopts the AgentSkills spec knows how to read the same playbook folder. One SKILL.md works unchanged in OpenClaw, Claude Code, OpenCode, Cursor, GitHub Copilot, and dozens of other runtimes; the playbook moves with you, not the other way around. This lesson proves that claim by installing one skill in Claude Code first, watching it visibly change behaviour, then watching the same skill fire in your OpenClaw Employee through WhatsApp. Section 4 is where you crack a real playbook open and read its anatomy. Section 5 is where you watch your Employee scan the spines in real time.

One file. Many runtimes.Skills are a portable spec, not OpenClaw branding.name + descriptionbody markdownscripts/ refs/SKILL.mdOpenClawClaude CodeOpenCodeCursorGitHub CopilotClineCodexWarp+ many moreWrite the SKILL.md once. It runs wherever AgentSkills runs.

A skill is programming in English. The frontmatter is the trigger contract; the body is the recipe; the optional scripts/ folder is where the 7 principles you used in the Agentic Coding Crash Course (bash, code, verification, decomposition) live as deterministic helpers the agent reaches for. The agent is the runtime that reads English and executes.

skills.sh is the broader cross-runtime registry; ClawHub is the OpenClaw-curated alternative. Each skill on skills.sh shows weekly install count, GitHub stars, and three independent security scanner verdicts (Gen Agent Trust Hub, Socket, Snyk) before you install. Skills are scanned artifacts with provenance, not raw markdown from a gist. The dangerous-code scanner blocks critical findings from skills.sh and ClawHub by default.

Want your coding agent to do the typing?

This lesson uses Claude Code (or OpenCode) as the learning lens: it's where you'll see the skill fire, read the SKILL.md, and author your own. You can also have it run the install commands for you. Open a terminal in the openclaw-employee/ folder from Lesson 2, start your coding agent, and paste this once:

I'm on Lesson 6. Help me pick a skill from skills.sh for [my real wish], install it for both Claude Code and OpenClaw, and walk me through the SKILL.md by hand so I read the standard myself.

The agent fetches the file; it cannot understand it for you. Read every SKILL.md it shows you. The lesson below is the manual path: every command spelled out. You can mix the two.

Section 1: Pick a real wish on skills.sh

Open skills.sh in your browser. Type a real wish in the search box: something your Employee cannot do yet that you actually want it to do. Wishes that work well in week one include a meeting summary, a customer reply draft, a marketing campaign brief, a code review checklist, an expense classification, a daily standup, or a one-page research summary.

Click the top result that fits. Read the skill's summary. Skim the SKILL.md preview right on the page (you will read it more carefully in Section 4). The skill page shows the install command literally. Copy it.

Vet before you install

Treat skills.sh like any package registry: a skill is third-party code (or close to it) that runs in your agent's context. Before you copy the install command, check four signals on the listing page:

  1. Author: is it from a recognized publisher (Anthropic, a company you know, a developer with other well-installed skills) or an anonymous handle with one upload?
  2. Weekly installs and GitHub stars: high numbers are not proof of safety, but a skill nobody else installs is a skill nobody else has audited.
  3. Security scanner verdicts: skills.sh runs three independent scanners (Gen Agent Trust Hub, Socket, Snyk) and shows each verdict on the listing. All three should read Pass. A "Critical" finding blocks install by default.
  4. The SKILL.md preview itself: skim what the skill actually does before you let your agent run it.

Whatever you pick, the install shape is the same: an npx skills add line with the repo URL (and optionally --skill <name> if the repo bundles several). If the listing has a fishy author, low installs, and any scanner warning, walk away and find another.

You're done with this section when: you have a copied install command and a clear sense of what the skill is supposed to do.

Section 2: Install across both runtimes

Open a terminal in the openclaw-employee/ folder from Lesson 2. That folder is your workspace for the rest of this lesson; everything you install lands somewhere you can navigate to with ls.

Paste the install command from Section 1. The CLI opens an interactive multi-select. The "Universal targets" group is pre-checked (around 13 runtimes by default, including Amp, Antigravity, Cline, Codex, Cursor, OpenCode, and others; the lineup grows as new runtimes opt in). Below that is an additional list. Manually check Claude Code AND OpenClaw there, then confirm.

The next prompt asks Project vs Global scope:

  • Project drops the skill into the current folder's .claude/skills/ (Claude Code) and skills/ (OpenClaw). Visible right there. Only fires when Claude Code runs from this folder, or when the OpenClaw gateway treats this folder as its workspace.
  • Global drops the skill into your home directory (~/.claude/skills/ and ~/.openclaw/skills/) so every Claude Code session and your Employee daemon picks it up regardless of which folder you cd into.

Pick Project for now. The whole point of Sections 3 to 5 is to make a skill tangible: you want to see the SKILL.md sitting next to your AGENTS.md and CLAUDE.md files inside openclaw-employee/. You can move it to Global later by re-running the install. (Section 6's authored skill also goes to Project scope for the same reason.)

After install, verify the skill landed in your folder:

ls .claude/skills/
ls skills/

You should see a folder named after the skill in both places. Same SKILL.md, two locations on disk, one for each runtime.

You're done with this section when: the skill folder is present in both .claude/skills/<name>/ and skills/<name>/ inside your openclaw-employee/ folder.

Promoting from Project to Global later

Once you have used a skill enough times to want it in every project on your machine, re-run the same install with Global scope:

npx skills add <same-repo-url>
# multi-select: Claude Code + OpenClaw
# scope: Global this time

Fresh copies land in ~/.claude/skills/<name>/ and ~/.openclaw/skills/<name>/. Optionally remove the Project copies so you don't have two:

rm -rf .claude/skills/<name> skills/<name>

OpenClaw's skill watcher (skills.load.watch is on by default with a 250ms debounce) picks up the new Global drop automatically; no restart needed. We confirm in Section 7 when we light up OpenClaw for the cross-runtime proof.

One thing to know: workspace beats Global on collisions (six tiers covered in Section 8). If you keep both copies, anything you do from openclaw-employee/ reads the Project copy and ignores Global. Useful for project-specific overrides; confusing if you forget the Project copy is still there.

Section 3: Try in Claude Code first

The fastest cross-runtime proof is in the terminal where your coding agent already runs. Claude Code (or OpenCode) is the learning lens for one reason: when the skill fires, you see the structurally different output one second later, and you read the SKILL.md that produced it without leaving the terminal. OpenClaw plumbing (gateway, channels, log tails) sits in the way; we get to it next.

From your project folder, start your coding agent:

claude

Then in the prompt:

/<skill-name> <real input from your domain>

(Substitute the skill's slug from the install you just ran. If unsure, list the folder names in ~/.claude/skills/ (or .claude/skills/ for Project scope). The folder name IS the slug.)

Provide a real input. Watch the response. A generic question without the skill produces a generic answer; the same question with the skill loaded produces output that visibly follows the skill's structure: specific sections, specific decision rules, specific format.

Three ways your Employee can use a skill

The slash command you just typed is one of three invocation modes. Each fits a different moment, and the lesson uses all three:

  1. Auto-activation (the default). When you DM your Employee in normal English, the gateway compares your message to every installed skill's description. If one matches, the body loads automatically. No slash, no thinking about it. This is the right mode for everyday use.
  2. Explicit /skill-name (what you just did). Forces the skill to load whether the description matched or not. Useful when you are testing, authoring, or when you know exactly which playbook applies and want to skip the matching step. (In Claude Code and OpenCode the form is /<skill-name>; in OpenClaw, when you DM your Employee from a paired channel in Section 7, the form is /skill <name>. Same idea, different prefix.)
  3. Pinned to your Employee's brain. Lesson 4 customized SOUL.md, IDENTITY.md, USER.md, MEMORY.md. You can add a line to MEMORY.md or your identity files like "When I ask for a meeting summary, always use the meeting-summary skill." The skill becomes part of who the Employee is; it does not have to rediscover the skill from the description every conversation.

Q: My Employee is not using a skill I installed. What now? Two likely causes. First, the description does not match how you phrase your request. Read the SKILL.md description (Section 4 below) and either rephrase your message to match its language, or sharpen the description if you authored the skill. Second, you want it always-on but never told the Employee. Pin it via brain customization: edit MEMORY.md (or your USER.md / IDENTITY.md from Lesson 4) and name the skill explicitly for the situations where it should fire. The third lever is /skill-name itself, which always works regardless of description matching.

You're done with this section when: Claude Code's response is structurally different from a no-skill response, and you can describe in one sentence what the skill changed.

Section 4: Read the SKILL.md by hand

Now make the artifact concrete. The skill on disk is a folder with this shape:

my-skill/
├── SKILL.md # required: metadata + instructions
├── scripts/ # optional: executable code
├── references/ # optional: extra documentation
└── assets/ # optional: templates, resources

From inside your openclaw-employee/ folder, open the SKILL.md:

cat .claude/skills/<name>/SKILL.md

The OpenClaw copy is at skills/<name>/SKILL.md in the same folder. Same SKILL.md, two runtimes, identical contents. (If you went Global instead, the paths are ~/.claude/skills/<name>/SKILL.md and ~/.openclaw/skills/<name>/SKILL.md respectively.)

The frontmatter

Two fields are required:

---
name: research-brief
description: "Use this skill when the user asks for a one-page research summary, a paper digest, or a literature brief. Produces a structured output with key findings, methodology, and limitations."
---

name is lowercase hyphen-case. description is the trigger. When a user message arrives, the gateway compares it against every installed skill's description and only loads the body of skills that look relevant. A vague description never fires; a sharp one fires exactly when it should.

Optional metadata.openclaw gates can add OpenClaw-specific constraints. The frontmatter parser only supports single-line keys, so metadata is a single-line JSON object:

metadata:
{
"openclaw":
{
"requires": { "bins": ["jq", "curl"] },
"os": ["darwin", "linux"],
"primaryEnv": "ANTHROPIC_API_KEY",
},
}

Common gates: requires.bins (binaries that must exist on the system), requires.anyBins (any one of a list), requires.env (environment variables), requires.config (OpenClaw config keys), os (darwin, linux, win32), primaryEnv (the env var the skill warns about if missing). Two more gates control invocation: user-invocable: false hides the skill from slash-command menus (model-only), and disable-model-invocation: true does the inverse. A skill that fails any gate simply does not load at startup.

The body structure

Below the frontmatter is markdown. Operational instructions, decision rules, output format. Optional folders sit alongside SKILL.md:

research-brief/
├── SKILL.md # required
├── scripts/ # optional: code the agent can call
└── references/ # optional: docs the agent reads on demand

The scripts/ folder holds executable code (Python, Bash, JavaScript) the agent invokes for deterministic work. The references/ folder holds extra documentation the body links to. Most useful skills do not need either; markdown alone does the work.

Agent-native thinking lands here

You just read English-plus-7-principles. The description is the trigger: the gateway compares it to your message and only loads the body when it looks relevant. The body is the recipe: operational instructions, decision rules, output format. The optional scripts/ folder is where deterministic helpers live (bash, Python, Node), the 7 principles you used in the Agentic Coding Crash Course, packaged so the agent can call them without retyping. This is programming in English. The agent is the runtime.

You're done with this section when: you can name three frontmatter fields without looking AND you can explain why the description is separated from the body.

Section 5: See progressive disclosure live

The fastest way to see progressive disclosure is to do it twice in Claude Code (or OpenCode): once with a message that should NOT match the skill, then once with a message that should. Compare the outputs.

Send a generic question that does not fit the skill's description. The reply is generic; nothing pulled the playbook off the shelf. Now send a question that does match the description (the same kind of question you used in Section 3). The reply is skill-shaped: specific sections, specific decision rules, specific format. The body loaded for that one turn and steered the response. That is progressive disclosure in action: the spine got read every turn, the body opened only when the situation called for it.

The official spec frames it as three stages:

  1. Discovery: at session start, only the name and description of every skill load. Just enough for your agent to know when each one might be relevant. Your Employee is reading the spines.
  2. Activation: when a user message matches a description, the full SKILL.md instructions load into context. Your Employee just pulled that playbook off the shelf.
  3. Execution: the agent follows the instructions, optionally executing bundled code or loading referenced files as needed. Your Employee is working through the procedure.

The cost: at session start, only the name plus description is injected into the system prompt for each installed skill. The system prompt baseline is 195 characters, and each registered skill adds roughly 97 characters of frontmatter (name plus description plus location). That works out to about 24 tokens per skill at startup. Bodies stay asleep until your message wakes them.

Progressive disclosure: pay only for what fires.Session startOpenClaw snapshots all eligible skillsname · descriptionname · descriptionname · description+24 tokens per skillbodies stay asleepno body cost until matchmatchYour message arrivesdescription matches the message intentname · descriptionbody markdownscripts/ + references/decision rulesoutput formatbody loads into contextuseAgent respondsbody steers the responsesection: findingssection: methodologysection: limitationsstructured outputreleased after the turnBodies stay asleep until your message wakes them.

You're done with this section when: you have seen one no-match reply and one match reply in Claude Code, and you can describe in one sentence why the body loaded only for the second one.

Section 6: Author your own with skill-creator

You have installed someone else's skill, watched it fire, and read its anatomy. Now write one yourself, while you are still inside the coding-agent terminal where the iteration loop is fastest.

Anthropic's skill-creator is itself a SKILL.md. It conducts conversational authoring: rough intent in, tested SKILL.md out. That same Anthropic skills repo at github.com/anthropics/skills ships around 18 other reference skills (slide creators, doc tools, brand templates) that double as worked examples of the format.

Install it:

npx skills add https://github.com/anthropics/skills --skill skill-creator

Multi-select Claude Code AND OpenClaw. Pick Project scope so you can iterate on it without polluting Global.

Pick a real workflow you do every day or every week: daily standup notes, customer reply template, code review checklist, expense classification, meeting summary, weekly status report. Whatever recurs.

Now invoke skill-creator from inside your coding agent. Same Claude-Code-first pattern as the install:

claude

Then in the prompt:

/skill-creator

skill-creator walks a fixed sequence: state the intent (when should this skill trigger?) → draft the description (the matcher, drafted first because it decides whether the skill triggers) → draft the body → write three real example inputs → run them through the draft → refine the description so it triggers reliably → ship.

What this looks like in practice: it asks you the intent, you state it. It drafts a description and asks if it captures the moments the skill should trigger. You refine. It drafts the body. It asks for three example inputs (real meeting notes, real customer messages, real expense receipts). You provide them. It runs them through the draft and shows you the output. You correct what is wrong. The description that was almost right gets sharpened. The body that missed a step gets the step added. Two or three rounds usually converges.

The final SKILL.md materializes in BOTH the OpenClaw workspace skills directory AND the Claude Code project skills directory simultaneously: skill-creator writes to both locations on ship. Test it once more in Claude Code with a real input to confirm the description triggers cleanly and the body produces the structure you wanted.

You did not port. You did not reimplement. You wrote one folder, and two different agent platforms can now read it the same way. Section 7 is where you prove the second platform reads it: you'll send a real input from WhatsApp and watch your Employee answer with the same structure.

You're done with this section when: your authored skill produces consistent structured output for a real input from your workflow in Claude Code, and the SKILL.md folder exists in both your Claude Code skills directory and the OpenClaw workspace skills directory.

Section 7: Cross-runtime proof via WhatsApp

Two skills are now installed: the one you picked from skills.sh in Section 2, and the one you authored in Section 6. Both live in OpenClaw too. Time to prove the cross-runtime claim from your phone.

OpenClaw watches skill folders by default (skills.load.watch is on with a 250ms debounce), so any SKILL.md you dropped during Sections 2 and 6 has already been picked up. Confirm:

openclaw skills list

Both skills should appear with their location tiers. (If one is missing, the workspace tier is not being watched. Either re-run from openclaw-employee/, append the path with openclaw config set skills.load.extraDirs --append "$(pwd)/skills", or fall back to openclaw gateway restart.) Now open WhatsApp (or Telegram, Discord, or the dashboard: whichever channel you paired in Lesson 2) and DM your Employee the same kind of input you used in Section 3:

/skill <name> <input>

Watch your Employee respond with the same skill behaviour you saw in Claude Code. Same SKILL.md folder. Two runtimes. Same result. Now do it again with the skill you authored in Section 6. Both skills produce structurally similar responses on both platforms because the spec is what they are reading, not the runtime.

(Discord shows autocomplete for skill arguments. Telegram and Slack show button menus. Specific UI follows each platform's bot SDK and may shift over time.)

You're done with this section when: both the installed skill and your authored skill produce structurally similar responses in Claude Code (Section 3) and your OpenClaw Employee on a paired channel.

Section 8: Six-tier precedence

Run openclaw skills list again. Each row shows the location tier where the skill lives.

Six tiers, top wins on name collision: <workspace>/skills/, <workspace>/.agents/skills/, ~/.agents/skills/, ~/.openclaw/skills/ (your Global install lands here), Bundled (ships in the OpenClaw npm package), plus skills.load.extraDirs and plugin skills at the bottom. Drop a same-named SKILL.md into a project's skills/ folder and it overrides everything else for that workspace.

Six tiers. Highest wins on collision.OpenClaw checks each tier in order at session start.1<workspace>/skills/your active workspace, highest priorityyou2<workspace>/.agents/skills/this workspace, agent-scopedthis workspace3~/.agents/skills/all agents, your machine, personalall your agents4~/.openclaw/skills/all agents, managed install (Global)all your agents5Bundledships in the OpenClaw npm packageeveryone6skills.load.extraDirs + plugin skillsexplicitly added directories or shipped by pluginsconfiguredWorkspacebeats homebeats bundled.Your workspace is the override layer. Drop a SKILL.md there and it wins.

The skill you installed in Section 2 with Global scope landed in tier 4 (~/.openclaw/skills/). If you ever want to override one specific skill for a specific project, drop a same-named SKILL.md into that project's skills/ folder (tier 1) and it wins.

(Per-agent allowlists, the lever for keeping a customer-facing agent in its lane while a private one has full access, are covered in Lesson 11 when you spawn a second agent. They build on top of these six tiers but are a separate concern.)

You're done with this section when: you can name the precedence layer your installed skill lives in AND you can explain in one sentence why dropping a SKILL.md into <workspace>/skills/ overrides everything else for that workspace.

Section 9: Skill Workshop captures procedural memory automatically

Every Employee user runs into the same complaint sooner or later: "I just taught it the same thing for the third time. Why doesn't it remember?" The fix lives below.

Skill Workshop is procedural memory for workspace skills. From the OpenClaw docs: it lets an agent turn reusable workflows, user corrections, hard-won fixes, and recurring pitfalls into SKILL.md files automatically. It is a plugin, disabled by default; enable it with one config command.

It watches for three signals:

  1. Direct invocation of the skill_workshop tool by your agent.
  2. Heuristic detection of phrases like "next time", "from now on", "remember to", "always include", and similar patterns.
  3. An LLM reviewer that analyses recent conversation turns periodically and proposes a skill if the pattern looks reusable.

Enable it:

openclaw config set plugins.entries.skill-workshop.enabled true
openclaw gateway restart
openclaw plugins list

skill-workshop shows up enabled. Now DM the Employee a real correction. Examples that trigger reliably:

From now on, when I ask for a meeting summary, always include
action items at the top with owner names and a deadline. Then
the discussion below in bullet form.

Or:

Next time I send you a customer email to handle, draft three
replies: short, neutral, and warm. Mark which one you recommend.

Or:

Remember to convert all USD figures to PKR using today's rate
when I send you any financial document.

The Employee replies normally. But behind the scenes, Skill Workshop's heuristic detector picks up the correction phrase. The LLM reviewer analyses the recent turns. A proposal queues into the pending state.

Manage proposals through the skill_workshop tool actions:

skill_workshop list_pending
skill_workshop inspect <proposal-id>
skill_workshop apply <proposal-id>
skill_workshop reject <proposal-id>
skill_workshop list_quarantine

Run list_pending and you see the proposal. Run inspect <proposal-id> and the agent shows you the draft SKILL.md: frontmatter, description, body, the example interaction it learned from. Walk it together. If the description is sharp and the body is right, run apply <proposal-id>.

The new SKILL.md persists in <workspace>/skills/<auto-named>/ (tier 1, the highest precedence). The watcher reloads it within ~250ms; no restart needed. Next time you describe the same situation, your Employee uses the captured skill instead of needing the correction again.

Skill Workshop: corrections become durable skills.You correct your Employeenatural language phrases"next time…"Workshop detectsheuristic + LLM reviewercaptured turnProposal queuedpending reviewlist_pendingYou approveSKILL.md persists in workspaceskill_workshop applyphrase triggerdraft proposalyou reviewnext correction is automaticSKILL.mdyour procedural memoryThe Employee learns from how you correct it.

A few details worth knowing:

  • Pending vs auto mode. The default is pending: every proposal queues for your review. You can set it to auto, where Workshop applies safe proposals automatically. Pending is the right default for the first few weeks; switch to auto once you trust what it captures.
  • Quarantine. Workshop runs every proposal through the dangerous-code scanner. Critical findings move the proposal to quarantine instead of pending. Run skill_workshop list_quarantine to see them, skill_workshop inspect to read why, and decide whether to rescue or reject.
  • Where it writes. Workshop writes ONLY to workspace/skills/<name>/ (tier 1, the highest precedence). It does not touch tier 4 managed skills, bundled skills, or anywhere else. Your hand-authored skills and Workshop-captured skills coexist in the same workspace folder.
  • What it does not do. Workshop captures and proposes. It does not scaffold, it does not test, it does not package, it does not publish. Authoring (Section 6) and publishing (Section 11) are separate workflows.

You're done with this section when: a Workshop proposal has been applied AND the next time you trigger the same situation, the Employee acts on the new skill without you correcting it again.

Section 10: ClawHub and find-skills: registries and meta-discovery

ClawHub is the OpenClaw-curated registry at clawhub.ai. OpenClaw-native CLI:

openclaw skills search <query>
openclaw skills install <slug>
openclaw skills list # the watcher already picked it up
openclaw skills update --all # later, to refresh installed ClawHub skills

The skill installs into workspace/skills/ (tier 1). ClawHub's CLI is OpenClaw-native, so installs land in the workspace by default. Pre-scanned for unsafe code on publish: critical findings from the dangerous-code scanner block install by default. The install fails closed, not warns. Override only after reviewing the source:

openclaw skills install <slug> --dangerously-force-unsafe-install

Two registries, one spec: skills.sh for breadth and cross-runtime, ClawHub for OpenClaw-curated and pre-scanned. Pick by use case. The skill itself is portable either way.

Now meet the meta-discovery layer. find-skills is a meta-skill: a skill whose job is finding other skills. It searches across registries and ranks results by what matches your intent. Install it:

npx skills add https://github.com/anthropics/skills --skill find-skills

In the multi-select, check Claude Code AND OpenClaw. Pick Global scope. The watcher picks it up automatically; no restart needed. Use it from your Employee or Claude Code:

/find-skills <a problem statement, in your own words>

90,000+ skills exist on skills.sh alone, plus tens of thousands on ClawHub. Most of the boring work in your week is already a SKILL.md somebody wrote. find-skills is how you find it.

You're done with this section when: find-skills has surfaced at least one skill you didn't know existed AND at least one ClawHub-installed skill has been triggered.

Section 11: (Optional) Publish your authored skill

Optional. No completion gate. Skip if you are tired.

If you authored a useful skill in Section 6 or Workshop captured one in Section 9 that you think others would benefit from, you can publish it.

For skills.sh distribution, push the skill folder to a GitHub repo:

cd ~/.openclaw/skills/<your-skill>
git init
git add SKILL.md
git commit -m "Initial commit of <your-skill>"
git remote add origin <your-repo-url>
git push -u origin main

Once it is public, anyone can install it with:

npx skills add <your-handle>/<your-repo>

For ClawHub distribution (publisher account required), use the publisher CLI:

clawhub sync --all
clawhub skill rescan <slug> # if a false positive flagged your skill

The dangerous-code scanner runs at publish time. If a scan flags your skill incorrectly, request a rescan.

The lesson is complete without it. But the moment you publish a skill someone else installs is the moment the cross-runtime spec stops being abstract.

You're done with this section when: your skill is installable by someone else with one command.

Treat third-party skills as untrusted code

Read the SKILL.md before enabling. Workspace and extra-dir skill discovery rejects symlinks pointing outside configured roots. Critical findings from the dangerous-code scanner block install by default; override only after reviewing the source. Environment variables and API keys consumed by skills run in the host process, not in a sandbox. Keep secrets out of skill prompts and out of the markdown body, and review what a skill's body asks before letting it run.


Try With AI

If you completed all ten sections (and the optional Section 11), the sections themselves are your homework. Two prompts stretch you further:

Exercise 1: Override a Global skill from your workspace

Section 8 showed that workspace skills (tier 1) beat Global skills (tier 4) on name collisions. Make that real on disk.

Pick one skill currently installed at Global scope (~/.openclaw/skills/<name>/).
Copy its folder into <workspace>/skills/<name>/ inside the openclaw-employee/
folder, then edit the SKILL.md description in the workspace copy to say something
deliberately different (for example, prepend "WORKSPACE OVERRIDE:"). Run
openclaw skills list (the watcher reloads within ~250ms; no restart needed).
Send a matching message and confirm the modified description's skill loaded,
not the Global one. Then explain in one sentence why workspace beats Global,
using paths from your own machine.

What you're learning: The six tiers are not a theoretical hierarchy: they are a real override mechanism you will use whenever a project needs a sharper or different version of a Global skill. Workspace as the override layer is the safety property that lets you customize per-project without touching what every other project sees.

Exercise 2: Read a SKILL.md as the Gateway Reads It

Pick a skill you have not invoked yet from openclaw skills list.

Open the SKILL.md for <skill-name>. Show me only the name and
description fields. Without reading the body, predict three
real user messages that should trigger this skill, and three
that look related but should NOT trigger it. Then send all six
messages and check whether the skill activated. We are testing
whether the description matches what its author intended.

What you're learning: The description is the matcher. Reading it the way the gateway reads it teaches you how triggers work, which is the foundation for writing your own SKILL.md descriptions in Section 6.


When Emma came back, James had three terminal windows open and his phone in his hand. On the laptop, two different runtimes (OpenClaw on the left, Claude Code on the right) had each just answered the same question with the same structured output, both using a SKILL.md James had written himself an hour earlier. On the third terminal, skill_workshop list_pending showed an empty queue: a proposal from twenty minutes ago had already been approved and applied, and the resulting SKILL.md was sitting in workspace/skills/usd-to-pkr-conversion/. On his phone, his Employee had just converted a USD figure to PKR without him asking it to convert.

"I installed two skills from two different registries," James said. "skills.sh first because I wanted the cross-runtime story. Then ClawHub for an OpenClaw-curated one. I invoked them both as slash commands from my phone. Then I authored my own with skill-creator. The standup-notes skill I wrote runs in both my Employee and Claude Code." He looked up. "Same folder. Not a port. The same artifact."

"And Workshop?" Emma asked.

James held up his phone. "I told it 'from now on convert USD to PKR' once. Workshop heard the correction phrase, queued a proposal, my coding agent showed it to me, I approved it, and now the Employee just does it. I didn't write a config. I didn't write a skill. I corrected it once and the correction stuck."

He thought about it. "At my old warehouse, when we trained a temp worker, the way I knew they had really learned was when the corrections stopped. Not because they were tired of being corrected. Because they had internalized the rule. They started doing the right thing without me saying anything. That is what just happened to my Employee. I corrected it, the correction became a SKILL.md, and now it does the right thing on its own."

Emma was quiet for a moment. "That analogy is going in my notes. And honestly: the first time I taught this lesson, I skipped the cross-runtime story entirely. Showed students how to install from ClawHub, never had them install from skills.sh, never showed them the multi-select with 55 targets. Half of them walked out thinking skills were an OpenClaw feature. I was teaching the surface, not the spec."

She glanced at the right terminal. "You have skills installed from two registries, invoked as slash commands, authored from scratch, and now learning automatically from how you correct your Employee. But your agent still cannot reach anything outside its own brain. It cannot query a live database, hit a third-party API, or read a real calendar. Skills tell it what to do; they do not give it the wires."

"That is the next thing?" James asked.

"That is Lesson 7," Emma said. "External connections through MCP. Skills can call them, but the wires themselves are a separate layer."

Flashcards Study Aid