Practice: File Processing Exercises
You've learned six powerful file processing workflows. You can survey folders, create safety backups, design organization rules, run batch operations, recover from mistakes, and search by description. That's real capability — but knowing the workflows and executing them under pressure are different things. The gap between understanding a workflow and applying it to 80 messy files with wrong extensions, duplicate names, and missing metadata is where most people stall.
These 13 exercises close the gap between understanding and fluency. Each module gives you two exercises: a Build exercise where you apply a workflow to real files, and a Debug exercise where you diagnose what went wrong when someone else did it poorly. Three skills run through every exercise: file workflow execution (applying survey, backup, organization, batch, recovery, and search workflows), systematic debugging (diagnosing errors by comparing expected vs. actual file state), and pipeline design (combining workflows into end-to-end systems).
Every exercise uses real starter files — actual folders with 20-100 files you'll survey, backup, organize, rename, recover, and search. This isn't hypothetical. By the end, you'll have practiced every workflow on messy, realistic data.
Download File Processing Exercises (ZIP)
After downloading, unzip the file. Each exercise has its own folder with an INSTRUCTIONS.md and starter files you need.
If the download link doesn't work, visit the repository releases page directly.
How to Use These Exercises
The workflow for every exercise is the same:
- Open the exercise folder from the `claude-code-file-processing-exercises/` directory
- Read the INSTRUCTIONS.md inside the folder — it describes the scenario and starter files
- Read the walkthrough below for context on what you're practicing and why
- Start Claude Code and point it at the exercise folder
- Work through the exercise — write your own prompts (use starters only if stuck)
- Reflect using the questions provided — this is where the real learning happens
You don't need to complete all 13 in one sitting. Work through one module at a time. Each module builds on the workflows from specific chapter lessons.
Tool Guide
- Claude Code — Required for all exercises. File processing is terminal work: surveying directories, copying files, renaming batches, verifying results. Claude Code runs these operations directly.
- Cowork — Can be used for Exercise 3.2 (analyzing organization rules) and capstone planning where you're designing systems on paper before executing. But Claude Code is strongly preferred since every exercise involves manipulating real files.
Key Differences from Chapter Lessons
In Lessons 0-7, you learned each workflow in isolation with guided walkthroughs. These exercises are different in three ways:
- No step-by-step instructions. The exercises describe the scenario and the goal. You decide the approach, choose the commands, and handle edge cases yourself.
- Build + Debug pairing. Every module has a Build exercise (apply the workflow) and a Debug exercise (diagnose someone else's mistakes). Debugging someone else's work develops different skills than doing the work yourself — you learn to read logs, compare expected vs. actual state, and trace root causes.
- Increasing independence. Modules 1-2 provide starter prompts to scaffold your learning. Modules 3-5 remove the scaffolding. Capstones remove everything — you design the entire approach.
By Module 5, you should be able to face a new file processing problem and instinctively reach for the right workflow without needing to review the chapter lessons.
The File Processing Framework
Use this for every exercise:
- Survey — What files exist? How many? How big? What types?
- Backup — Create a safety net before any changes
- Plan — Design rules and approach before executing
- Test — Try on ONE file first
- Execute — Run the batch operation
- Verify — Compare results against expectations
- Document — Save rules, logs, and templates for reuse
This framework applies to every file management task, not just these exercises. Whether you're reorganizing a photo library, cleaning up a code repository, or migrating data between systems, these seven steps prevent the mistakes that turn a 10-minute task into a 2-hour recovery operation. Notice that steps 1-3 happen before any files change. That's intentional — most file disasters come from skipping preparation.
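To make the seven steps concrete, here is a minimal bash sketch of the framework applied to a hypothetical folder named `project/`. Every path and file name in it is illustrative, not part of the exercises; treat it as a skeleton to adapt, not a script to run as-is.

```bash
#!/usr/bin/env bash
# Minimal sketch of the seven-step framework against a hypothetical "project/" folder.
set -euo pipefail

SRC="project"
BACKUP="backup-$(date +%F)"

# 1. Survey: know what exists before anything changes
find "$SRC" -type f | wc -l
find "$SRC" -type f -exec file --brief {} + | sort | uniq -c | sort -rn

# 2. Backup: archive mode preserves permissions, timestamps, and symlinks
rsync -a "$SRC/" "$BACKUP/"

# 3. Plan / 4. Test: design the rules, then try them on ONE file by hand

# 5. Execute: run the batch only after the single-file test looks right

# 6. Verify: file counts in source and backup must match
src_count=$(find "$SRC" -type f | wc -l)
bak_count=$(find "$BACKUP" -type f | wc -l)
[ "$src_count" -eq "$bak_count" ] || echo "MISMATCH: $src_count vs $bak_count"

# 7. Document: leave a trail future-you can follow
echo "$(date +%FT%T) backed up $SRC to $BACKUP" >> file-ops.log
```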
Assessment Rubric
For each exercise, evaluate yourself on:
| Criteria | Beginner (1) | Developing (2) | Proficient (3) | Advanced (4) |
|---|---|---|---|---|
| Investigation Quality | Runs ls once | Surveys files and sizes | Full inventory with types, sizes, dates | Discovers hidden files, symlinks, edge cases |
| Safety Practices | No backup | Copies some files | Complete backup with verification | Timestamped backup + integrity check + rollback plan |
| Workflow Execution | Random commands | Follows steps but skips verification | Complete workflow with verification | Adapts workflow to edge cases dynamically |
| Problem Diagnosis | Guesses at issues | Identifies obvious problems | Traces root cause systematically | Identifies root cause AND prevents recurrence |
| Documentation | No record | Basic notes | Reusable rules and templates | Complete system with rules + logs + recovery plan |
Module 1: File Investigation
Core Skill: Understanding what you have before changing anything (Lessons 0-1)
Exercise 1.1 — The Project Handoff (Build)
The Problem:
Open the module-1-file-investigation/exercise-1.1-project-handoff/ folder. You'll find messy-project/ — a folder with 40+ files across nested directories, no documentation, and a mix of source code, data files, images, and config files. A colleague just left the company and handed you this project folder with the message "figure out what's in here." No README, no explanation, no file manifest. You need to understand the project well enough to brief your team by end of day.
Your Task:
Survey the entire folder structure and create a comprehensive FILE-INVENTORY.md that documents everything: total file count, file types with counts, directory tree, largest files, newest and oldest files, any hidden files or dotfiles, and a summary of what this project appears to be. Someone reading your inventory should understand the folder's contents without opening a single file.
What You'll Learn:
- How to systematically survey an unfamiliar folder instead of randomly clicking through files
- Which bash commands (`find`, `du`, `ls -la`, `file`, `wc`) reveal the most about a folder's contents
- That a thorough 5-minute survey prevents the "I didn't know that file existed" surprises that derail workflows later
Starter Prompt (Intentionally Vague):
"What's in this folder?"
Better Prompt (Build Toward This):
After exploring with `ls -laR`, `find . -type f | wc -l`, and `du -sh */`, try: "Survey this entire folder and create FILE-INVENTORY.md with: (1) total file count and directory count, (2) file type breakdown with counts, (3) full directory tree, (4) top 5 largest files with sizes, (5) newest and oldest files by modification date, (6) any hidden files or dotfiles, (7) a 2-sentence summary of what this project appears to be."
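If you want a starting point for the survey itself, here is a hedged sketch of commands that answer the prompt's questions. The `-printf` flag and `sort -h` assume GNU tools; on macOS, `stat` and plain numeric sorting are the usual substitutes.

```bash
# Run from inside messy-project/. Each command answers one inventory question.
find . -type f | wc -l                          # total file count, nested dirs included
find . -type d | wc -l                          # directory count
find . -name '.*' -type f                       # dotfiles anywhere in the tree
du -ah . | sort -rh | head -n 6                 # largest entries (sort -h: GNU/newer BSD)
find . -type f -printf '%T@ %p\n' | sort -n | sed -n '1p;$p'        # oldest, then newest (GNU find)
find . -type f -exec file --brief {} + | sort | uniq -c | sort -rn  # file-type breakdown
```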
Reflection Questions:
- How many files did your initial `ls` miss that a deeper survey with `find` revealed?
- What surprised you most about the folder's contents? Would you have discovered it without a systematic survey?
- How long did the full survey take? Compare that to how long you'd spend recovering from a missed file causing problems mid-workflow.
Exercise 1.2 — The Lying Inventory (Debug)
The Problem:
Open the module-1-file-investigation/exercise-1.2-lying-inventory/ folder. You'll find two things: wrong-inventory.md — a file inventory someone created — and actual-files/ — the folder it supposedly documents. The inventory has errors. Wrong file counts, missing files, incorrect sizes, missed hidden files, and at least one file listed that doesn't actually exist.
Your Task:
Systematically compare the inventory against the actual folder. Find every discrepancy. For each error, document: what the inventory claims, what's actually true, and what command proves it. Create a CORRECTIONS.md listing every error with evidence.
What You'll Learn:
- How to verify someone else's work by comparing claims against ground truth
- That inventories go stale the moment files change — verification is not optional
- The specific commands (`diff`, `comm`, `find`, `stat`) that catch discrepancies between documentation and reality
Starter Prompt (Intentionally Vague):
"Check if this inventory is correct."
Better Prompt (Build Toward This):
After running `find actual-files/ -type f | wc -l` and comparing to the inventory's claimed count: "Compare wrong-inventory.md against the actual-files/ directory. For every discrepancy, document: (1) what the inventory claims, (2) what's actually true, (3) the bash command that proves the difference. Check file counts, file sizes, hidden files, directory structure, and any files listed in the inventory that don't exist. Output everything to CORRECTIONS.md."
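One possible verification pass is sketched below. It assumes nothing about the inventory's internal format beyond it listing paths and counts; the spot-checked file name is a made-up example.

```bash
# Ground truth first: every actual path, hidden files included
find actual-files/ -type f | sort > actual-paths.txt
wc -l < actual-paths.txt                 # compare to the inventory's claimed count

# Prove or disprove one specific claim (the file name is hypothetical)
stat actual-files/budget-2023.xlsx 2>/dev/null \
  || echo "listed in the inventory but absent on disk"

# If you extract the inventory's paths into claimed-paths.txt, comm shows
# what's unique to each side (column 1: claimed only; column 2: disk only)
comm -3 <(sort claimed-paths.txt) actual-paths.txt
```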
Reflection Questions:
- How many errors did you find? Which category of error was most common (wrong counts, missing files, wrong sizes)?
- Which error would have caused the most damage if you'd trusted the inventory and acted on it?
- What verification steps would you add to your own inventory-creation workflow to prevent these errors?
Module 2: Safety-First Backup
Core Skill: Creating safety nets before any destructive operation (Lesson 2)
Exercise 2.1 — The Migration Prep (Build)
The Problem:
Open the module-2-safety-first/exercise-2.1-migration-prep/ folder. You'll find migration-source/ — a folder containing project files that need to be migrated to a new structure. The folder has symlinks, files with special characters in names, a mix of permissions, some large binary files, and nested directories 4 levels deep. Before you migrate anything, you need a complete, verified backup.
Your Task:
Create a comprehensive backup of migration-source/ that handles all edge cases: symlinks (follow or preserve?), special characters, permissions, large files, and deep nesting. Verify the backup is complete by comparing file counts, total sizes, and spot-checking specific files. Document your backup strategy in BACKUP-LOG.md.
What You'll Learn:
- That `cp -r` alone follows symlinks and drops permissions — real backups need `rsync` or careful `cp` flags
- How to verify backup completeness beyond "the folder exists" (compare counts, sizes, checksums)
- Why documenting your backup strategy matters: if the migration fails, you need to know exactly what you backed up and how
Starter Prompt (Intentionally Vague):
"Back up this folder before I migrate it."
Better Prompt (Build Toward This):
After running `find migration-source/ -type l` and `find migration-source/ -name '*[[:space:]]*'` to discover edge cases: "Create a backup of migration-source/ to backup-YYYY-MM-DD/. Handle: (1) symlinks — preserve them as symlinks, don't follow, (2) files with spaces and special characters, (3) preserve file permissions, (4) nested directories to any depth. After copying, verify: file count matches, total size matches, spot-check 3 specific files by comparing checksums. Document everything in BACKUP-LOG.md."
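A sketch of one way to execute that backup by hand follows; `rsync -a` preserves symlinks as symlinks along with permissions and timestamps, and the spot-checked file name is illustrative.

```bash
BACKUP="backup-$(date +%F)"                     # backup-YYYY-MM-DD

# Archive mode: recurses, preserves symlinks as symlinks, keeps permissions
rsync -a migration-source/ "$BACKUP/"

# Verify structure: any output here means a path exists on only one side
diff <(cd migration-source && find . | sort) <(cd "$BACKUP" && find . | sort)

# Verify size, then spot-check a file by checksum (name is illustrative)
du -s migration-source "$BACKUP"
cksum "migration-source/notes file.txt" "$BACKUP/notes file.txt"
```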
Reflection Questions:
- What edge cases did you discover during the backup that a simple
cp -rwould have missed? - How did you verify completeness? Would your verification catch a file that was copied but corrupted?
- If the migration fails in 3 months, could someone use your BACKUP-LOG.md to restore everything? What information would they need that you didn't include?
Exercise 2.2 — The Incomplete Backup (Debug)
The Problem:
Open the module-2-safety-first/exercise-2.2-incomplete-backup/ folder. You'll find original/ (the source folder with 30+ files) and backup/ (a backup someone made). The backup is missing 6 files. Your job is to figure out why each file is missing — not just that it's missing, but what caused the omission.
Your Task:
Compare backup/ against original/ and identify all 6 missing files. For each one, determine the likely cause: Was it a hidden file that cp skipped? A symlink that wasn't followed? A file created after the backup timestamp? A file in a subdirectory that wasn't recursively copied? Document each discrepancy with its probable cause in DIAGNOSIS.md.
What You'll Learn:
- The six most common reasons backups are incomplete (hidden files, symlinks, permissions, timestamps, recursive depth, filename encoding)
- How to use `diff <(cd original && find . | sort) <(cd backup && find . | sort)` to systematically find missing files
- That diagnosing why a backup failed is more valuable than just noticing it failed — the cause tells you how to prevent it next time
Starter Prompt (Intentionally Vague):
"Is this backup complete?"
Better Prompt (Build Toward This):
After running `diff <(cd original && find . | sort) <(cd backup && find . | sort)` to see the differences: "Compare backup/ against original/. For each missing file, determine: (1) the file path, (2) the file type (regular, hidden, symlink, etc.), (3) the probable cause it was missed (hidden file, symlink not followed, permission denied, not recursive, created after backup). Output findings to DIAGNOSIS.md with evidence commands."
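Once the diff shows which paths are missing, a small evidence-gathering pass can point at each probable cause. The path below is a placeholder, not a real file in the exercise.

```bash
# Paths prefixed with "<" exist only in original/ and were never backed up
diff <(cd original && find . | sort) <(cd backup && find . | sort) | grep '^<'

# Evidence-gathering for one missing path (substitute a path from the diff)
f="./subdir/missing-file"      # placeholder
( cd original
  ls -ld "$f"                  # hidden? unusual permissions?
  [ -L "$f" ] && echo "symlink: likely not preserved by the copy"
  stat "$f"                    # mtime after the backup? created too late
)
```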
Reflection Questions:
- Which of the 6 missing files would have caused the biggest problem if you'd relied on this backup during a migration?
- Did any of the missing files share a common cause? What single change to the backup command would have caught the most files?
- How would you modify your backup workflow from Exercise 2.1 to prevent every cause you discovered here?
Module 3: Organization Rules
Core Skill: Designing categorization systems through collaborative refinement (Lesson 3)
Exercise 3.1 — The Freelancer's Chaos (Build)
The Problem:
Open the module-3-organization-rules/exercise-3.1-freelancer-chaos/ folder. You'll find freelancer-files/ — a flat folder with 60+ files from a freelancer's working directory. There are client deliverables mixed with personal files, invoices mixed with project assets, drafts mixed with finals, and file names that follow no consistent pattern. Some are invoice_march.pdf, others are ACME_logo_v3_FINAL_FINAL.png, others are notes.txt. The freelancer has 3 active clients and needs to be able to find any client's files within 30 seconds.
Your Task:
Design organization rules that sort these files into a logical folder structure. Don't just move files — first write a rules.md that defines your categories, naming conventions, and how to handle ambiguous files. Then apply the rules. Then verify the result: every file should be in exactly one category, no file should be lost, and someone unfamiliar with the project should understand where to find things.
What You'll Learn:
- That organization rules must handle ambiguity: what do you do with a file that fits two categories?
- How iterative refinement with Claude produces better rules than trying to design the perfect system upfront
- The difference between organizing for yourself (you know what "notes.txt" means) and organizing for others (nobody else does)
Reflection Questions:
- How many files didn't fit cleanly into your initial categories? What did you do with them?
- Did Claude suggest categories you hadn't considered? Did you push back on any of its suggestions?
- If this freelancer adds 20 new files next month, would your rules handle them without modification?
Exercise 3.2 — The Collision Course (Debug)
The Problem:
Open the module-3-organization-rules/exercise-3.2-collision-course/ folder. You'll find broken-rules.md — an organization system someone designed — and unsorted-files/ — a batch of files waiting to be organized. The rules have problems: overlapping categories (a file could go in two places), gaps (some file types aren't covered), ambiguous criteria (what counts as "important"?), and contradictory instructions.
Your Task:
Analyze broken-rules.md and identify every flaw: overlaps, gaps, ambiguities, and contradictions. Fix each one with a clear rationale. Then apply your corrected rules to unsorted-files/ and verify the result.
What You'll Learn:
- How to audit organization rules for completeness and consistency
- That rules which look logical on paper often fail when applied to real files with messy names and mixed purposes
- The specific failure modes of categorization systems: overlap, gaps, ambiguity, and contradiction
The Twist: After fixing the rules and organizing the files, ask Claude to generate 5 hypothetical file names that your corrected rules STILL wouldn't handle cleanly. Fix your rules again to accommodate them. This reveals whether your rules are truly robust or just happened to work on the test set.
Reflection Questions:
- Which flaw was hardest to spot by reading the rules alone? Did it only become obvious when you tried to apply them?
- How many of the 5 hypothetical files exposed new gaps? Were the gaps in your categories, your naming criteria, or your ambiguity-resolution rules?
- At what point do organization rules become too complex to follow? Where's the line between thorough and over-engineered?
Module 4: Batch Operations
Core Skill: Transforming repetitive file tasks into systematic batch workflows (Lesson 4)
Exercise 4.1 — Photo Library Cleanup (Build)
The Problem:
Open the module-4-batch-operations/exercise-4.1-photo-library/ folder. You'll find photo-dump/ — 50+ image files with inconsistent naming from four different sources: a phone camera (IMG_20240315_142355.jpg), a DSLR (DSC_0042.NEF), screenshots (Screenshot 2024-03-15 at 2.23.55 PM.png), and downloads (photo.png, vacation (copy 2).JPEG). File extensions are mixed case. Some files have EXIF dates, some don't. There are duplicates with slightly different names pointing to the same image.
Your Task:
Design a batch rename operation that gives every file a consistent name format: YYYY-MM-DD_description_NNN.ext (lowercase extension). Handle: files without dates (use file modification date), duplicates (append sequential numbers), and missing descriptions (use "unnamed"). Test on 3 files first, then run the full batch. Verify no files were lost by comparing counts before and after.
What You'll Learn:
- That batch rename is deceptively complex: edge cases (duplicates, missing metadata, encoding issues) multiply fast
- Why testing on a small batch first catches problems that would be catastrophic at full scale
- How to design rename rules that handle messy real-world data, not just clean test cases
Key Edge Cases to Watch For:
- Two photos taken in the same second (collision handling)
- Files with no date in the name AND no EXIF data (what's the fallback?)
- Mixed-case extensions: `.JPG` vs `.jpg` vs `.jpeg` (are these the same type?)
- Files with parentheses, spaces, or unicode characters in names
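Here is a hedged sketch of the rename loop with collision handling built in. EXIF extraction is omitted (a tool like `exiftool`, if installed, can supply real dates), the description is fixed to "unnamed" where real logic would derive one, and `date -r` for the mtime fallback assumes GNU coreutils.

```bash
mkdir -p renamed
for src in photo-dump/*; do
  [ -f "$src" ] || continue
  ext="${src##*.}"
  ext="$(printf '%s' "$ext" | tr '[:upper:]' '[:lower:]')"   # .JPG -> jpg
  day="$(date -r "$src" +%F 2>/dev/null || echo 1970-01-01)" # mtime fallback (GNU date)
  n=1
  dest="$(printf 'renamed/%s_unnamed_%03d.%s' "$day" "$n" "$ext")"
  while [ -e "$dest" ]; do                  # collision: bump NNN, never overwrite
    n=$((n + 1))
    dest="$(printf 'renamed/%s_unnamed_%03d.%s' "$day" "$n" "$ext")"
  done
  echo "$src -> $dest" >> rename-log.txt    # the log enables Exercise 4.2-style recovery
  cp -- "$src" "$dest"                      # copy, not move, until verified
done
```

Copying into a separate `renamed/` directory instead of renaming in place is the undo button: the originals stay untouched until verification passes.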
Reflection Questions:
- How many edge cases did you discover during the 3-file test that would have caused problems in the full batch?
- What was your strategy for files without EXIF dates? How confident are you in the fallback?
- If you needed to undo this rename, could you? What would you need to have saved?
Exercise 4.2 — The Rename Disaster (Debug)
The Problem:
Open the module-4-batch-operations/exercise-4.2-rename-disaster/ folder. You'll find mangled-files/ — files after a botched batch rename — and rename-log.txt — a log of every rename operation that was executed. The rename script had bugs: some files were overwritten (duplicates mapped to the same target name), some extensions were corrupted, and some files ended up with garbled names from encoding issues.
Your Task:
Use rename-log.txt to reconstruct what happened. Identify every bug in the original rename logic. Determine which files can be recovered (the log has the original names) and which are permanently damaged (overwritten). Create a RECOVERY-PLAN.md documenting: what went wrong, which files are recoverable, and the recovery commands.
What You'll Learn:
- How rename logs enable recovery — and why every batch operation should produce one
- The three most common batch rename bugs: collision (two files map to same target), encoding corruption, and extension mangling
- That the cost of a botched batch operation is proportional to the number of files — which is exactly why you test on 3 first
The Challenge: After completing the recovery plan, write a corrected rename script (or prompt) that handles all the edge cases the original missed. Test it on the recovered files to prove it works. Compare your corrected version against the original log to verify every bug is addressed.
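For the recovery half, a log-driven dry run can classify files before you move anything. This sketch assumes log lines shaped like `old-name -> new-name`; adjust the parsing to the actual rename-log.txt format.

```bash
while IFS= read -r line; do
  old="${line%% -> *}"                 # text before " -> "
  new="${line##* -> }"                 # text after " -> "
  if [ -e "$new" ] && [ ! -e "$old" ]; then
    echo "recoverable: $new -> $old"
    # mv -- "$new" "$old"              # enable only after the dry run looks right
  else
    echo "NOT recoverable (overwritten, or already restored): $line"
  fi
done < rename-log.txt
```

Note what happens when two log lines map different originals to the same target: the first restore succeeds, and the second reports not recoverable — which is exactly the collision damage the exercise describes.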
Reflection Questions:
- How many files were permanently lost due to overwrites? Could any of them be recovered from elsewhere?
- What single change to the original rename logic would have prevented the most damage?
- If the rename log didn't exist, how would your recovery approach change? What does this tell you about the value of logging?
Module 5: Recovery & Search
Core Skill: Recovering from disasters and finding needles in haystacks (Lessons 5-6)
Exercise 5.1 — The Accidental Flatten (Build)
The Problem:
Open the module-5-recovery-and-search/exercise-5.1-accidental-flatten/ folder. You'll find two directories: flattened/ — where all files from a nested project structure were accidentally moved into a single flat directory — and backup/ — an older backup that preserves the original nested structure. Someone ran a command that moved every file to the root level, destroying the folder hierarchy. The flattened directory has 40+ files that used to live in subdirectories like src/, tests/, docs/, and assets/. File names are intact, but all directory context is gone.
Your Task:
Reconstruct the original nested directory structure. Use backup/ as your reference for where files should go, but note that some files in flattened/ are newer than the backup (they were created after the backup was made). You'll need to decide where new files belong based on their type and naming patterns. Verify the reconstruction by comparing your result against the backup's structure.
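One way to start the reconstruction is to treat the backup as a lookup table, as in the sketch below; files with no match are the newer ones that need a judgment call. If duplicate basenames exist in different subdirectories, a checksum comparison would be needed to disambiguate.

```bash
for f in flattened/*; do
  name="${f##*/}"
  match="$(find backup/ -name "$name" -type f | head -n 1)"
  if [ -n "$match" ]; then
    echo "$name -> ${match#backup/}"   # the subdirectory it used to live in
  else
    echo "$name -> NO MATCH: newer than backup, place by type/naming pattern"
  fi
done
```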
What You'll Learn:
- How to use an older backup as a reference map without blindly restoring it (newer files exist)
- That recovery is a judgment call: you need rules for files that don't have an obvious home
- The workflow of recovery: assess damage, identify reference, plan reconstruction, execute, verify
Reflection Questions:
- How many files had a clear home based on the backup? How many required judgment about where they belonged?
- What was your strategy for new files that didn't exist in the backup? How confident are you in your placement?
- What would you do differently if no backup existed? What other clues could you use to reconstruct the structure?
Exercise 5.2 — The Tax Season Hunt (Search)
The Problem:
Open the module-5-recovery-and-search/exercise-5.2-tax-season-hunt/ folder. You'll find document-archive/ — a folder with 100+ files accumulated over 3 years of freelance work. Somewhere in this mess are 5 specific documents you need for tax filing. You don't know the exact file names — you only have descriptions:
- "The contract I signed with Acme Corp in Q2 2023"
- "The receipt for the laptop I bought for work"
- "My health insurance summary from last year"
- "The invoice where I charged the highest amount"
- "The spreadsheet where I tracked monthly expenses"
Your Task:
Find all 5 documents using only the descriptions above. For each one, document: the search strategy you used, the commands that narrowed it down, and the final file path. You cannot open every file — there are too many. You need to use metadata (dates, sizes, types), file names, and content search (grep) strategically.
What You'll Learn:
- How to translate vague human descriptions into specific search criteria (dates, keywords, file types)
- That different search strategies work for different types of descriptions (date-based vs. content-based vs. size-based)
- The power of combining search approaches: `find` by date range + `grep` by keyword + `file` by type
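As one worked example, here is how those three approaches might combine for the "Acme contract from Q2 2023" description. The keyword and date window are read straight off the description; `-newermt` works in GNU and recent BSD find.

```bash
# Date window: Q2 2023 (April through June), by modification time
find document-archive/ -type f -newermt 2023-04-01 ! -newermt 2023-07-01

# Content keyword, case-insensitive, matching filenames only
grep -ril 'acme' document-archive/

# What types live in the archive at all (PDFs? spreadsheets?)
find document-archive/ -type f -exec file --brief {} + | sort | uniq -c | sort -rn
```

Modification time only helps if it survived whatever copying the archive went through; content search is usually the more reliable signal for descriptions like these.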
The Extension: After finding all 5 documents, create a SEARCH-CHEATSHEET.md documenting the search strategies that worked. For each strategy, include: when to use it, the command pattern, and an example. This cheatsheet becomes a reusable reference for future search tasks.
Reflection Questions:
- Which document was hardest to find? What made the description difficult to translate into search criteria?
- Which search strategy was most effective overall: searching by date, by name pattern, by content, or by file type?
- If this archive grows to 1,000 files, which of your search strategies would still work? Which would break down?
Module 6: Capstone Projects
Choose one (or more). This is where everything comes together — no starter prompts provided.
Capstones are different from the exercises above. There are no guided prompts — you design the entire approach yourself. Each project requires applying all six workflows together to solve a realistic problem. Where module exercises test individual skills, capstones test your ability to orchestrate those skills into a coherent pipeline. The quality of your documentation matters as much as the result — someone should be able to follow your process and reproduce it on a different folder.
Capstone A — The Full Pipeline
Open the module-6-capstone/capstone-A-full-pipeline/ folder. You'll find messy-downloads/ — an 80-file folder that simulates a real Downloads directory accumulated over months. There are PDFs, images, code files, spreadsheets, documents, archives, duplicates, and files with no clear purpose.
Take this folder through the complete File Processing Framework:
- Survey — Create a comprehensive inventory of what's in the folder
- Backup — Create a verified safety backup before touching anything
- Plan — Design organization rules with categories, naming conventions, and ambiguity resolution
- Test — Apply your rules to 5 files and verify the result
- Execute — Organize and rename the full batch
- Verify — Compare results against your plan; confirm no files lost
- Document — Create a complete record: rules, logs, before/after stats
What You'll Learn:
- How all six workflows connect into a single pipeline — each step's output feeds the next step's input
- That the full pipeline takes longer than any single workflow, but prevents the cascading failures that make individual steps fail
- The discipline of documenting as you go: your `rules.md`, `backup-log.md`, and `verification-report.md` become a reusable template
Deliverables:
- `FILE-INVENTORY.md` — Complete survey of the 80-file folder
- `backup/` — Verified safety backup with integrity checks
- `rules.md` — Organization categories, naming conventions, ambiguity handling
- `rename-log.txt` — Record of every rename operation
- `VERIFICATION-REPORT.md` — Before/after comparison, file count confirmation, spot checks
Reflection Questions:
- Which step took the longest? Which step prevented the most potential problems?
- Did any step's output force you to revise a previous step? (For example, did executing the batch reveal a gap in your rules?)
- How would you adapt this pipeline for a different domain — say, organizing a music library or a code repository?
Capstone B — The Team File System
Open the module-6-capstone/capstone-B-team-file-system/ folder. You'll find team-scenario.md — a description of a 3-person team (a designer, a developer, and a project manager) who share a project folder and keep stepping on each other's files. The scenario describes their pain points: overwritten designs, lost meeting notes, conflicting versions.
Design a complete shared file system for this team:
- Folder structure — Where does each person's work live? Where do shared assets go?
- Naming conventions — How do files indicate owner, version, status (draft/review/final)?
- Organization rules — What goes where? How are conflicts resolved?
- Backup strategy — How often? What's backed up? Who's responsible?
- Recovery plan — When someone accidentally overwrites a file, what's the procedure?
- Search guide — How does each person find what they need quickly?
Create the complete folder structure, write the rules document, and verify the system handles all the scenarios described in team-scenario.md.
What You'll Learn:
- That file systems designed for teams require explicit conflict resolution rules that personal systems don't
- How to think about file management as a system design problem, not just a cleanup task
- The difference between organizing for one person (intuitive) and organizing for a team (explicit)
Deliverables:
- Complete folder structure (created as actual directories)
- `FILE-SYSTEM-RULES.md` — Naming conventions, ownership, version control, conflict resolution
- `BACKUP-STRATEGY.md` — Schedule, scope, responsibility assignments
- `RECOVERY-PROCEDURES.md` — Step-by-step instructions for common disaster scenarios
- `QUICK-SEARCH-GUIDE.md` — How each team member finds what they need
Reflection Questions:
- Which team member's workflow was hardest to accommodate? What compromises did you make?
- How did you handle the case where the designer and developer both need to modify the same asset file?
- Would your system scale to a 10-person team, or would it need fundamental redesign?
Capstone C — Your Own Files
Open the module-6-capstone/capstone-C-your-own-files/ folder for a self-assessment template. Then close it — this capstone uses YOUR actual files.
Pick a real folder on your computer — your Downloads, Desktop, Documents, or a project directory that's gotten messy. Apply all six workflows to your actual files:
- Survey your real folder — how bad is it?
- Backup everything before you touch it — real files, real stakes
- Design rules that fit how you actually work, not how you wish you worked
- Test on a handful of files first
- Execute the full organization
- Verify nothing was lost
- Document your rules so future-you can maintain the system
What Makes This Special: Unlike Capstones A and B, this one has real consequences. The files you're organizing are files you actually need. The rules you design must fit your real workflow. The backup must be genuinely complete because there's no "reset" button. This is where the File Processing Framework proves its value — or exposes gaps in your understanding.
Deliverables:
- `BEFORE-SNAPSHOT.md` — Survey of your folder before any changes
- `backup/` — Complete safety backup of your real files
- `MY-RULES.md` — Organization rules tailored to your actual workflow
- `AFTER-SNAPSHOT.md` — Survey of your folder after organization
- `WHAT-I-LEARNED.md` — What worked, what surprised you, what you'd do differently
Reflection Questions:
- Were your real files messier or cleaner than you expected? What surprised you during the survey?
- Did any of your organization rules from the earlier exercises fail when applied to your actual files? Why?
- Will you maintain this system going forward? What's the minimum maintenance effort to keep it organized?
What's Next
You've practiced all six file processing workflows across 13 exercises — surveying, backing up, organizing, batch processing, recovering, and searching. More importantly, you've practiced them on messy, realistic data where edge cases actually matter.
The File Processing Framework you've internalized (Survey, Backup, Plan, Test, Execute, Verify, Document) transfers to any domain where you direct General Agents to work with files, data, or any structured collection. Whether you're managing a photo library, migrating a codebase, or reorganizing a team's shared drive, the same seven steps prevent the same classes of mistakes.
These patterns become the foundation for automated workflows in later chapters, where the manual prompting you practiced here evolves into autonomous agent behavior. The rules you wrote, the logs you kept, and the verification habits you built are exactly what AI Employees need to operate independently — they just need those patterns encoded as instructions rather than typed as prompts.