Plugins & Extensibility Exercises: Settings, Hooks, Plugins, and Automation
You've learned to configure settings across three scopes, create hooks that fire on events, discover and install plugins from the marketplace, and set up autonomous iteration loops. Each capability is powerful on its own. But the real leverage comes when they work together — settings that establish team standards, hooks that enforce them automatically, plugins that package everything for reuse, and Ralph Wiggum loops that iterate without babysitting.
These 15 exercises are designed to be started as soon as you finish each lesson — not saved for the end. Module 1 practices what you learned in Lesson 14 (Settings), Module 2 practices Lesson 15 (Hooks), and so on; you don't need to finish all the extensibility lessons before you begin. Each module targets one capability, and most pair a hands-on exercise where you build something real with a debug exercise where you fix something broken. Three capstones at the end combine everything into complete systems.
Download Plugins Exercises (ZIP)
After downloading, unzip the file. Each exercise has its own folder with an INSTRUCTIONS.md and any starter files you need.
If the download link doesn't work, visit the repository releases page directly.
How to Use These Exercises
Start each module right after finishing the corresponding lesson:
| After Lesson... | Do Module... |
|---|---|
| Lesson 14: Settings | Module 1: Settings Hierarchy |
| Lesson 15: Hooks | Module 2: Hooks |
| Lesson 16: Plugins | Modules 3-5: Discovery, Usage, Packaging |
| Lesson 17: Ralph Wiggum Loop | Module 7: Ralph Wiggum |
| All of the above | Module 8: Integration Capstones |
The workflow for every exercise is the same:
- Open the exercise folder from the claude-code-plugins-exercises/ directory
- Read the INSTRUCTIONS.md inside the folder — it has setup steps and starter files
- Read the walkthrough below for context on what you're practicing and why
- Start Claude Code or Cowork and point it at the exercise folder
- Work through the exercise — write your own prompts, don't just copy the starter
- Reflect using the questions provided — this is where the real learning happens
Tool Guide
- Claude Code — Terminal-based, best for settings configuration, hook scripting, and plugin packaging
- Cowork — Desktop app, best for plugin evaluation and marketplace browsing
Most exercises work with either tool. Where one is clearly better, the exercise notes will say so.
The Extensibility Workflow
Use this for every exercise:
- Identify — What capability do you need? (settings, hooks, plugin, automation)
- Check — Does a plugin or marketplace already have it?
- Configure — Set up settings, hooks, or install the plugin
- Test — Verify it works in your workflow
- Package — Bundle for reuse if you built something custom
- Share — Distribute via marketplace if valuable to others
This workflow mirrors the real plugin ecosystem lifecycle: before building anything, check if it exists. After building something, package it so others don't have to rebuild it.
Module 1: Settings Hierarchy
Core Skill: Configuring Claude Code settings across user, project, and local scopes
Exercise 1.1 — Configure a Team Setup (Hands-on)
The Problem:
Open the module-1-settings-hierarchy/exercise-1.1-multi-scope-config/ folder. You'll find a simulated team project with three developers who have different preferences: one wants verbose output, one wants minimal output, and the third wants custom permissions. The project itself needs consistent behavior regardless of who's working on it.
Your Task:
Create settings files at all three levels (user, project, local) that satisfy everyone. The project-level .claude/settings.json should enforce team standards. Each developer's user-level settings should reflect their preferences. The local-level settings should override for a specific sensitive subdirectory that needs stricter permissions.
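For orientation, a minimal sketch of what the project-level file might contain (the `permissions` keys shown are illustrative; confirm the exact schema against the settings reference for your Claude Code version):

```json
{
  "permissions": {
    "allow": ["Read", "Grep", "Bash(npm test)"],
    "deny": ["Bash(rm -rf *)"]
  }
}
```

The user-level and local-level files use the same shape; when the same key appears at more than one level, precedence decides which value wins.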
What You'll Learn:
- How the three-level hierarchy (local > project > user) resolves conflicting preferences
- Why project settings are the right scope for team standards
- That local overrides exist for exactly the cases where one directory needs different rules
Starter Prompt:
"Help me set up Claude Code settings for a team of 3. We need project-wide standards but each developer has different preferences."
Better Prompt (Build Toward This):
"Create three settings files: (1) a project-level .claude/settings.json that sets allowedTools to a safe default list and enables autoApproveReads, (2) a user-level ~/.claude/settings.json that sets my personal outputFormat preference, and (3) a local .claude/settings.local.json in the secrets/ subdirectory that disables all write tools. Show me what happens when a setting appears at multiple levels."
Reflection Questions:
- When a setting exists at both user and project level, which wins? Did the result match your prediction?
- What kinds of settings belong at project level vs. user level? What's your rule of thumb?
- If a new developer joins the team, what do they get automatically from project settings vs. what do they need to configure themselves?
Exercise 1.2 — Trace the Precedence Bug (Debug)
The Problem:
Open the module-1-settings-hierarchy/exercise-1.2-settings-conflict/ folder. You'll find a project where settings aren't behaving as expected. The developer configured allowedTools at the project level, but Claude keeps asking for permission on tools that should be auto-approved. Three settings files exist — but one of them has a subtle override that's causing the conflict.
Your Task: Read all three settings files. Trace through the precedence rules (local > project > user) to find which setting is winning and why. Fix the configuration so the intended behavior works. Document the precedence chain you traced.
What You'll Learn:
- How to systematically trace settings precedence when behavior doesn't match expectations
- That a single local override can silently defeat project-wide settings
- The debugging technique of reading all settings files before changing any
Reflection Questions:
- Which file contained the override? Was it obvious or did you have to trace the full chain?
- How would you prevent this class of bug in a real team project?
- What bash command would you run to quickly find all settings files in a project?
Module 2: Hooks
Core Skill: Creating event-driven automation that guarantees workflow rules
Exercise 2.1 — Build a Hook Suite (Hands-on)
The Problem:
Open the module-2-hooks/exercise-2.1-event-automation/ folder. You'll find a project that needs three automated checks: (1) every time Claude tries to write a file, check that it doesn't contain API keys, (2) every time Claude finishes running a bash command, log the command to an audit file, and (3) when a user submits a prompt, check that it doesn't reference files outside the project directory.
Your Task:
Create a .claude/settings.json with a hooks configuration containing three hooks: a PreToolUse hook for the write check, a PostToolUse hook for bash logging, and a UserPromptSubmit hook for the path check. Each hook needs the correct event, matcher, and command.
What You'll Learn:
- The difference between PreToolUse, PostToolUse, and UserPromptSubmit events
- How matchers filter which tool calls trigger a hook
- That hooks run shell commands — so anything bash can do, a hook can automate
Starter Prompt:
"Help me create hooks that prevent secrets from being written, log bash commands, and validate prompt paths."
Better Prompt (Build Toward This):
"Create a .claude/settings.json with three hooks: (1) a PreToolUse hook with matcher Write|Edit that runs a script checking the file content for patterns like API_KEY, SECRET, TOKEN — if found, exit 2 to block the write; (2) a PostToolUse hook with matcher Bash that appends the command to audit.log with a timestamp; (3) a UserPromptSubmit hook that checks if the prompt references paths outside $PWD and warns if so. Create the hook scripts in .claude/hooks/."
Reflection Questions:
- What happens when a PreToolUse hook exits with code 2 vs. code 0? Did you test both paths?
- Why is bash logging a PostToolUse hook rather than PreToolUse? What would change if you swapped them?
- Could you add a hook that runs on every Claude response? What would you use it for?
Exercise 2.2 — Fix the Broken Hooks (Debug)
The Problem:
Open the module-2-hooks/exercise-2.2-hook-failures/ folder. You'll find a settings.json with three hooks that are all broken. Bug 1: A PreToolUse hook fires on every tool call instead of just file writes. Bug 2: A PostToolUse hook script exists but has wrong permissions and never executes. Bug 3: A UserPromptSubmit hook references an environment variable that doesn't exist, causing silent failures.
Your Task: Diagnose and fix all three bugs. For each bug, document: what the symptom was, what the root cause was, and what you changed to fix it. Run the hooks after fixing to verify they work.
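Before editing any configuration, a few generic shell checks (the script name and environment variable below are hypothetical) will usually surface all three root causes:

```bash
# Is every hook script executable? A script without the execute bit never runs.
ls -l .claude/hooks/

# Does the script run on its own, and what does it exit with?
echo '{}' | .claude/hooks/log-bash.sh
echo "exit code: $?"

# Is the environment variable the third hook references actually set?
env | grep AUDIT_LOG_PATH || echo "AUDIT_LOG_PATH is not set"
```

If the execute bit turns out to be missing, `chmod +x .claude/hooks/*.sh` is the one-line fix.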
What You'll Learn:
- The most common hook configuration mistakes (missing matchers, wrong permissions, missing env vars)
- How to debug hooks by checking exit codes, permissions, and environment
- That silent failures are the hardest bugs — a hook that doesn't fire looks the same as a hook that doesn't exist
Reflection Questions:
- Which bug was hardest to find? How did you diagnose it?
- What's the fastest way to verify a hook is actually firing? What would you add to help debugging?
- If you were writing a "hooks debugging checklist," what would the first 3 items be?
Module 3: Plugin Discovery
Core Skill: Finding and evaluating plugins from the marketplace
Exercise 3.1 — Build a Plugin Evaluation Matrix (Hands-on)
The Problem:
Open the module-3-plugin-discovery/exercise-3.1-marketplace-explorer/ folder. You'll find workflow-needs.md — a document describing 5 workflow needs for a development team: automated commit messages, code review checklists, test coverage reporting, documentation generation, and dependency auditing.
Your Task:
For each workflow need, search the Claude Code plugin marketplace (using /plugin or the Discover tab) and find at least one plugin candidate. Create an evaluation matrix comparing each plugin on: functionality match, installation complexity, maintenance burden, and team fit. Recommend which plugins to install and which needs are better served by custom skills.
The plugin marketplace is actively evolving. If you find fewer plugins than expected, that's normal — evaluate what's available and note gaps where custom skills would fill the need. The evaluation process matters more than finding a perfect match for every category.
What You'll Learn:
- How to systematically evaluate plugins instead of installing the first one you find
- The tradeoff between using an existing plugin vs. building a custom skill
- That marketplace plugins vary in quality — evaluation criteria matter
Starter Prompt:
"Help me find plugins for these 5 workflow needs and create a comparison matrix."
Better Prompt (Build Toward This):
"For each need in workflow-needs.md, search the plugin marketplace and evaluate candidates on 4 criteria: (1) Does it solve the need fully or partially? (2) How many steps to install and configure? (3) Does it require ongoing maintenance? (4) Does it fit a 3-person team workflow? Create a markdown table with scores 1-5 for each criterion. For needs where no plugin scores above 3 overall, recommend building a custom skill instead."
Reflection Questions:
- How many of the 5 needs had good plugin matches? How many are better served by custom skills?
- What evaluation criteria did you use that weren't in the starter prompt? Were they important?
- If the marketplace had 10x more plugins, would your evaluation process change?
Exercise 3.2 — Audit a Messy Plugin Setup (Debug)
The Problem:
Open the module-3-plugin-discovery/exercise-3.2-plugin-audit/ folder. You'll find a project with 8 plugins installed — but the setup is a mess. Two plugins do the same thing (redundant), one plugin conflicts with a project hook, one is installed at the wrong scope (user vs. project), and one hasn't been updated in months. The developer doesn't know which plugins are actually useful.
Your Task: Audit the entire plugin setup. For each plugin, determine: what it does, whether it's redundant, whether it conflicts with anything else, and whether it's at the right scope. Produce a cleanup recommendation: which to keep, which to remove, and which to reconfigure.
What You'll Learn:
- How plugin sprawl creates maintenance burden and conflicts
- The technique of auditing installed plugins against actual usage
- That fewer well-configured plugins beat many poorly-configured ones
Reflection Questions:
- How many of the 8 plugins were actually providing value? How many were dead weight?
- What caused the plugin-hook conflict? How would you prevent it during installation?
- What periodic audit process would you set up to prevent plugin sprawl?
Module 4: Plugin Usage
Core Skill: Installing and using plugins effectively in real workflows
Exercise 4.1 — Install and Use Plugins (Hands-on)
The Problem:
Open the module-4-plugin-usage/exercise-4.1-install-and-use/ folder. You'll find a project that needs two plugins: one for automated commit messages (like commit-commands) and one for code quality checks (like an LSP integration plugin). The project has a README.md explaining the team's commit message conventions and quality standards.
Your Task: Install both plugins from the marketplace. Configure each one to match the team's conventions — the commit message plugin should follow the conventional commits format described in the README, and the quality plugin should check the standards listed there. Run both plugins on the sample code and verify they produce correct output.
Some plugins require external tools (Node.js, npm, language servers). If a plugin installation fails, check its dependencies first. The exercise README lists what you need installed.
What You'll Learn:
- The full install-configure-verify cycle for marketplace plugins
- How to customize plugin behavior to match team conventions
- That installation is only half the work — configuration determines whether a plugin helps or hurts
Starter Prompt:
"Install commit-commands and an LSP plugin, then configure them for our team conventions."
Better Prompt (Build Toward This):
"Read README.md for our team conventions. Then: (1) Install the commit-commands plugin with /plugin install commit-commands. (2) Configure it to use the conventional commits format from the README — prefixes should be feat:, fix:, docs:, refactor:. (3) Install an LSP-based quality plugin. (4) Test both: make a small code change, use the commit plugin to generate a message, and run the quality check. Show me the output from both."
Reflection Questions:
- Did the plugins produce useful output out of the box, or was configuration required?
- How would you share these plugin configurations with the rest of the team?
- What would happen if two team members had different plugin configurations for the same tool?
Exercise 4.2 — Chain Plugins in a Workflow (Hands-on)
The Problem:
Open the module-4-plugin-usage/exercise-4.2-plugin-workflow-chain/ folder. You'll find a development project where a typical workflow involves: write code, lint it, run tests, generate a commit message, and update the changelog. Each step could be handled by a separate plugin or hook.
Your Task: Set up a workflow that chains multiple plugins together. When you finish writing code: (1) a hook triggers the linter, (2) if linting passes, another hook runs the tests, (3) if tests pass, a plugin generates the commit message, and (4) a final plugin updates the changelog. Configure the full chain and run it end-to-end.
What You'll Learn:
- How to compose plugins and hooks into multi-step workflows
- The importance of failure handling — what happens when step 2 fails?
- That workflow chains need clear sequencing and exit-on-failure semantics
Starter Prompt:
"Set up a workflow that lints, tests, commits, and updates the changelog automatically."
Better Prompt (Build Toward This):
"I want an automated workflow chain: (1) Add a PostToolUse hook for Write|Edit that runs npm run lint on the changed file. If lint fails (exit code != 0), block further actions. (2) Add another hook that runs npm test after successful lint. (3) Use the commit-commands plugin to generate a commit message. (4) Use a changelog plugin to append the change. Chain these so each step only runs if the previous one succeeded. Test with a deliberate lint failure to verify the chain stops."
Reflection Questions:
- What happened when you introduced a deliberate failure at step 2? Did the chain stop cleanly?
- Is this workflow too automated or not automated enough? Where would you add human checkpoints?
- How would you debug this chain if it silently skipped a step?
Module 5: Plugin Packaging
Core Skill: Packaging your own capabilities as distributable plugins
Exercise 5.1 — Package Skills into a Plugin (Hands-on)
The Problem:
Open the module-5-plugin-packaging/exercise-5.1-package-skills-plugin/ folder. You'll find three standalone skills in .claude/skills/: a code review skill, a documentation generator skill, and a test scaffolding skill. Each works independently, but they're not packaged as a plugin — they can't be installed by other projects or shared with teammates.
Your Task:
Package all three skills into a single plugin. Create a plugin.json manifest with the correct structure: plugin name, version, description, author, and the list of included skills. Verify the plugin structure matches what the marketplace expects. Test by removing the original skills, installing from your plugin, and confirming all three skills still work.
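A minimal illustration of the manifest described above (field names and skill paths are assumptions; verify them against the official plugin manifest schema before relying on them):

```json
{
  "name": "dev-toolkit",
  "version": "1.0.0",
  "description": "Code review, documentation generation, and test scaffolding skills",
  "author": { "name": "Your Name" },
  "skills": [
    "./skills/code-review/SKILL.md",
    "./skills/doc-generator/SKILL.md",
    "./skills/test-scaffolding/SKILL.md"
  ]
}
```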
What You'll Learn:
- The anatomy of a plugin.json manifest
- How skills map to plugin components
- The test-by-removal technique: remove originals, install from plugin, verify nothing broke
Starter Prompt:
"Package these 3 skills into a plugin with a proper plugin.json manifest."
Better Prompt (Build Toward This):
"Read the 3 skills in .claude/skills/. Create a plugin.json with: name dev-toolkit, version 1.0.0, description that summarizes all 3 capabilities, and a skills array pointing to each skill's SKILL.md. Follow the official plugin manifest schema. After creating the manifest, test it: move the original skills to a backup folder, install the plugin locally, and verify all 3 skills work by invoking each one."
Reflection Questions:
- What metadata did the plugin.json require that wasn't in the individual skills?
- Did all three skills work after being installed from the plugin? If not, what broke?
- What would you need to add to make this plugin installable by someone outside your team?
Exercise 5.2 — Package a Complete Bundle (Advanced)
The Problem:
Open the module-5-plugin-packaging/exercise-5.2-full-plugin-bundle/ folder. You'll find a more complex setup: skills, hooks, an MCP server configuration, and an agent definition. These work together as a system — the hooks enforce standards, the skills automate tasks, the MCP server provides external data, and the agent orchestrates everything.
Your Task: Package the entire bundle into a single plugin that, when installed, sets up everything: all skills, hooks, MCP configuration, and agent definitions. The plugin should be self-contained — installing it on a fresh project should give the full system with no manual configuration.
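One way the packaged bundle could be laid out (directory and file names are illustrative, not a required structure):

```
full-system-plugin/
├── plugin.json     # manifest listing every component
├── README.md       # setup steps, including the MCP server's environment variables
├── skills/         # the task-automation skills
├── hooks/          # hook scripts plus their event/matcher configuration
├── agents/         # the orchestrating agent definition
└── mcp.json        # MCP server config (external dependency: document it, don't hardcode secrets)
```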
What You'll Learn:
- How plugins can bundle more than just skills — hooks, MCP, and agents are all packageable
- The challenge of packaging configuration that depends on external services (MCP servers)
- That "self-contained" means thinking about dependencies and defaults
Starter Prompt:
"Package everything in this project (skills, hooks, MCP config, agents) into a single distributable plugin."
Better Prompt (Build Toward This):
"Create a complete plugin bundle: (1) Add all skills from .claude/skills/ to the plugin manifest. (2) Include the hooks from settings.json in a hooks section. (3) Include the MCP server config with a note about required environment variables. (4) Include the agent definition from .claude/agents/. (5) Add a README.md in the plugin directory explaining setup steps for the MCP dependency. Test by cloning to a new directory and installing fresh."
Reflection Questions:
- What was the hardest part to package — skills, hooks, MCP, or agents? Why?
- How did you handle the MCP server's external dependency? Is your solution portable?
- Would a new user be able to install and use this plugin without contacting you? What documentation would help?
Module 7: Ralph Wiggum
Core Skill: Configuring safe autonomous iteration loops
Exercise 7.1 — Set Up an Autonomous Loop (Hands-on)
The Problem:
Open the module-7-ralph-wiggum/exercise-7.1-autonomous-loop/ folder. You'll find a project with 12 test files, all failing. The failures are independent — each test has a small bug in the corresponding source file. Fixing them one at a time with manual prompts would require 12 separate interactions. This is a classic Ralph Wiggum candidate: clear completion criteria (all tests pass), many iterations, and each fix is independent.
Your Task: Set up a Ralph Wiggum autonomous loop. Install the plugin if needed. Write a prompt with an embedded completion promise: "Keep iterating until all 12 tests pass. After each fix, run the test suite. Stop when all tests are green or after 15 iterations, whichever comes first." Run the loop and observe how it progresses.
Before starting, run cd project && npm install && npm test to confirm you see 12 failing tests. If the test runner isn't installed or errors look different than expected, resolve the setup first — the loop depends on consistent test output.
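To make the structure concrete, here is a conceptual bash sketch of the loop the plugin automates (it assumes headless `claude -p` invocation and illustrates the completion promise plus the iteration cap; it is not the recommended production setup):

```bash
#!/usr/bin/env bash
# Conceptual Ralph Wiggum loop: repeat the same prompt until the completion
# promise (all tests pass) is met, or stop at the iteration cap.
MAX_ITERATIONS=15

for i in $(seq 1 "$MAX_ITERATIONS"); do
  if npm test; then
    echo "All tests pass after $i iteration(s). Done."
    exit 0
  fi
  echo "Iteration $i: tests still failing, asking for the next fix."
  claude -p "Run npm test, pick the simplest failing test, fix the bug in the corresponding source file, re-run the tests, and report what you fixed."
done

echo "Reached the $MAX_ITERATIONS-iteration cap without all tests passing. Stopping."
exit 1
```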
What You'll Learn:
- How to write a completion promise that's objective and verifiable
- Why iteration limits prevent runaway loops (and how to choose a good limit)
- The difference between tasks that are good Ralph Wiggum candidates and tasks that aren't
Starter Prompt:
"Fix all the failing tests in this project. Keep going until they all pass."
Better Prompt (Build Toward This):
"All 12 tests are currently failing due to bugs in the source files. Fix them autonomously: (1) Run npm test to see current failures. (2) Pick the simplest failing test. (3) Read the test to understand expected behavior. (4) Fix the corresponding source file. (5) Run npm test to verify the fix and check remaining failures. (6) Repeat until all 12 tests pass. Stop after 15 iterations maximum. After each fix, report: which test you fixed, what the bug was, and how many tests remain."
Reflection Questions:
- How many iterations did the loop take? Did it complete within the 15-iteration limit?
- Did any fix break a previously passing test? How would you prevent regression in autonomous loops?
- Would this task work well without Ralph Wiggum? What makes it a good vs. bad candidate for autonomous iteration?
Exercise 7.2 — Debug a Stuck Loop (Debug)
The Problem:
Open the module-7-ralph-wiggum/exercise-7.2-stuck-loop-debug/ folder. You'll find a session log from a Ralph Wiggum loop that got stuck. The developer asked Claude to "refactor all functions to use async/await and keep going until the linter is clean." The loop ran 20 iterations, never reached clean linter output, and burned through significant tokens. The log shows the same 3 linter errors appearing repeatedly — Claude fixes one, introduces another, and cycles endlessly.
Your Task: Analyze the session log to identify why the loop got stuck. Determine: (1) Was the completion promise achievable? (2) Was the iteration limit appropriate? (3) What caused the cyclic behavior? Then rewrite the prompt with a completion promise and constraints that would have prevented the infinite loop.
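A progress check is one concrete guard to build into the rewritten loop. A hedged sketch (the lint command and state file are hypothetical):

```bash
#!/usr/bin/env bash
# Stop the loop when the linter error count stops decreasing between iterations.
prev=$(cat .loop-progress 2>/dev/null || echo 999999)
curr=$(npm run lint 2>&1 | grep -c " error ")

if [ "$curr" -ge "$prev" ]; then
  echo "No progress: $curr errors now vs $prev last iteration. Stop and reassess the approach."
  exit 1
fi
echo "$curr" > .loop-progress
```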
What You'll Learn:
- Why vague completion promises ("until the linter is clean") can be unachievable
- How cyclic fixes indicate a fundamentally flawed loop design
- The importance of progress checks — if no progress after N iterations, stop and reassess
Reflection Questions:
- At what point should the loop have stopped? What signal indicated it was stuck?
- How would you rewrite the completion promise to prevent cycling? What makes a promise "achievable"?
- What's a reasonable cost budget for an autonomous loop? How would you enforce it?
Module 8: Integration Capstones
Choose one (or more). Each combines multiple extensibility features.
Capstones are different from the exercises above: there are no starter prompts — you design the entire approach yourself. Each project requires combining settings, hooks, plugins, and automation into a complete system.
Capstone A — Full Plugin from Scratch
Open the module-8-capstones/capstone-A-full-plugin/ folder. You'll find a set of requirements for a plugin that doesn't exist yet: it should include 2 skills (one for API documentation generation, one for endpoint testing), 2 hooks (pre-commit validation and post-test reporting), and an MCP server integration for an external API. Build the entire plugin from scratch: skills, hooks, MCP config, and plugin.json manifest. Then create a marketplace.json that registers your plugin — making it discoverable and installable by others. Test by installing it on a fresh project from your marketplace.
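For the marketplace step, a heavily hedged sketch of what the registration file might contain (field names are assumptions to verify against the marketplace documentation; the plugin name and path are hypothetical):

```json
{
  "name": "my-team-marketplace",
  "owner": { "name": "Your Name" },
  "plugins": [
    {
      "name": "api-toolkit",
      "source": "./plugins/api-toolkit",
      "description": "API docs and endpoint-testing skills with pre-commit and post-test hooks"
    }
  ]
}
```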
What You'll Learn:
- How to design a plugin from requirements, not from existing code
- The full build-test-package-distribute cycle end to end, including marketplace creation
- That a well-designed plugin saves hours for every developer who installs it
Capstone B — Team Extensibility Kit
Open the module-8-capstones/capstone-B-team-extensibility-kit/ folder. You'll find profiles for a 3-person development team with different roles (frontend, backend, DevOps). Design a complete extensibility kit: project-level settings that enforce shared standards, user-level settings templates for each role, hooks for code quality and security, plugins for each role's specific workflow, and a CLAUDE.md that documents everything. The kit should be installable on a new project in under 5 minutes.
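A sketch of how the kit could be organized (names are illustrative):

```
team-extensibility-kit/
├── CLAUDE.md                     # documents the standards, hooks, and 5-minute install steps
├── .claude/
│   ├── settings.json             # project scope: shared standards for all three roles
│   └── hooks/                    # code-quality and security hook scripts
├── user-settings-templates/
│   ├── frontend.settings.json    # each developer copies their role's template to ~/.claude/settings.json
│   ├── backend.settings.json
│   └── devops.settings.json
└── plugins.md                    # which plugins each role installs, and why
```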
What You'll Learn:
- How to design extensibility for a team, not just an individual
- The balance between shared standards (project settings) and personal preferences (user settings)
- That extensibility systems need documentation as much as configuration
Capstone C — Your Workflow
Open the module-8-capstones/capstone-C-your-workflow/ folder for a self-assessment template. Audit your own Claude Code setup: what settings do you have? What hooks are running? What plugins are installed? What's missing? Then improve your setup by adding at least one item from each category: a setting you should have configured, a hook that would catch a mistake you've made before, and a plugin that would speed up your most common task. Document before and after.
What Makes This Special: Unlike Capstones A and B, this one has real stakes: the changes you make apply to YOUR actual workflow. Many developers discover they've been using only a small fraction of Claude Code's extensibility features — this exercise closes that gap.
What You'll Learn:
- Which extensibility features you've been ignoring and why
- That a small investment in hooks and plugins pays compound returns
- How to build a personal improvement plan for your Claude Code setup
Assessment Rubric
After completing the exercises, evaluate yourself on each dimension:
| Criteria | Beginner (1) | Developing (2) | Proficient (3) | Advanced (4) |
|---|---|---|---|---|
| Configuration Accuracy | Settings don't take effect | Settings work but wrong scope | Correct scope and precedence | Optimized for team collaboration |
| Hook Design | Hooks don't fire | Hooks fire but wrong behavior | Correct events, matchers, and actions | Hooks compose into complete workflow |
| Plugin Competence | Can't install plugins | Installs but doesn't configure | Full lifecycle (install, use, package) | Creates distributable marketplace plugins |
| Debug Skill | Can't identify issue | Finds issue but wrong fix | Correct diagnosis and fix | Prevents class of issues proactively |
| Automation Quality | Loop doesn't run | Loop runs but never completes | Correct completion promise and limits | Cost-aware, safe, production-ready |
Target: Proficient (3) across all dimensions by Module 7. Advanced (4) is demonstrated through capstone completion.
What's Next
You've practiced the three core skills — configuration management, plugin lifecycle, and automation design — across 15 exercises. These skills compound: every exercise builds intuition for when to configure a setting, when to write a hook, when to install a plugin, and when to let Ralph Wiggum iterate autonomously. The extensibility features you practiced aren't extras — they're how professionals scale their Claude Code workflows from individual productivity to team-wide systems.
Next, in Lesson 32: Chapter Quiz, you'll test your understanding of all the Chapter 3 concepts: settings, hooks, plugins, and autonomous iteration. This lesson complements Lesson 6 (basics exercises) and Lesson 10 (skills exercises) — together they cover the full arc from problem-solving fundamentals to extensibility mastery.