Phase 4 — Debug & Master
Chapters 56-57: Debugging and TDG Independence
Your role: Debugger -- "I can diagnose failures and drive TDG without scaffolding"
You have spent three phases learning to read, specify, and verify. You can look at AI-generated code and predict what it does. You can write type-annotated function stubs that tell AI exactly what to build. You can write test suites that define "correct" before a single line of implementation exists. You can iterate on AI output, handle exceptions, and validate data at boundaries.
But there is one skill you have not practiced: what to do when it all goes wrong.
Not "the test fails and you re-prompt." You have done that. Phase 4 is about the harder cases: the traceback that points to a line you have never seen, the function that runs without errors but returns the wrong answer, the bug that passes pyright and pytest but produces nonsense in production. These are the failures that separate someone who uses AI from someone who works with AI.
Phase 4 is the shortest phase in Part 4. Two chapters. No new Python syntax. No new libraries. Just two skills:
- Debugging (Ch 56): Reading tracebacks, using print statements strategically, recognizing the five patterns AI gets wrong most often, and following a systematic five-step loop (reproduce, isolate, identify, fix, verify) every time.
- Independence (Ch 57): Driving the full TDG cycle from a one-sentence problem statement to working, tested, verified code -- without step-by-step instructions, without scaffolding, without anyone telling you what to do next.
After this phase, the TDG method never changes. Phases 5-9 apply it to new domains (objects, files, APIs, deployment, architecture), but the cycle stays the same: specify, test, generate, verify, debug. Phase 4 is where that cycle becomes yours.
| # | Chapter | Lessons | Key Focus |
|---|---|---|---|
| 56 | Debugging AI-Generated Code | 5 | Tracebacks, print debugging, AI failure patterns, the debugging loop, SmartNotes bug hunt |
| 57 | TDG Mastery | 5 | Problem to spec, complete test suite, generate/verify/debug, search capstone, transfer challenge |
The Debugging Toolkit (Chapter 56)
Chapter 56 gives you four tools and one process:
| Tool | What it does | When to use it |
|---|---|---|
| Traceback reading | Read Python's crash report bottom-up: error type first, then trace the call chain | Code crashes with an error message |
| Print debugging | Add temporary print() statements to see variable values at key points | Code runs but produces wrong results |
| Pattern recognition | Spot the five mistakes AI makes most often (off-by-one, wrong operator, missing edge case, type narrowing, scope error) | Before running: scan the code on first read |
| The debugging loop | Reproduce → isolate → identify → fix → verify | Every time, regardless of the bug type |
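To make the toolkit concrete, here is a minimal sketch of print debugging one of the five patterns named above -- an off-by-one slice. The function names are illustrative, not the chapter's actual exercise code:

```python
def last_n_lines(text: str, n: int) -> list[str]:
    """Return the last n lines of text (buggy AI draft)."""
    lines = text.split("\n")
    return lines[-n - 1:]  # BUG: off-by-one, returns n + 1 lines


def last_n_lines_fixed(text: str, n: int) -> list[str]:
    """Corrected version after running the debugging loop."""
    lines = text.split("\n")
    return lines[-n:]  # fix: slice exactly the last n lines


note = "alpha\nbeta\ngamma\ndelta"

# Reproduce and isolate: a temporary print makes the silent bug visible.
print("buggy:", last_n_lines(note, 2))   # three lines come back, not two
print("fixed:", last_n_lines_fixed(note, 2))
```

No error is raised here, which is exactly why traceback reading alone cannot catch it -- the code runs but produces the wrong result, so print debugging is the right tool.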
The capstone (Lesson 5) puts all four together: five planted bugs in a SmartNotes module, one from each Error Taxonomy category. No hints. Just the tools and the loop.
The Independence Test (Chapter 57)
Chapter 57 removes the scaffolding. Every TDG exercise so far has been guided: "Write this stub. Now write these tests. Now prompt AI." Chapter 57 gives you a problem statement and says "go."
| Step | What you do | Tools |
|---|---|---|
| Specify | Translate English into function stubs with types and docstrings | Your editor + uv run pyright |
| Test | Write the complete test suite before any implementation | uv run pytest (RED) |
| Generate | Prompt AI: "Implement the function that passes these tests" | Claude Code |
| Verify | Run ruff check → pyright → pytest -v | The verification stack |
| Debug | Apply the debugging loop from Ch 56 when tests fail | Tracebacks + prints + patterns |
Lesson 4 is a 25-minute timed challenge: build the SmartNotes search feature from scratch. Then Lesson 5 is the true transfer test: build a completely different feature (reading list builder) with no prior walkthrough. Same method, different algorithm. If both produce green test suites, TDG is yours.
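As a rough illustration of the Specify and Test steps in the table above, here is a hypothetical sketch of a SmartNotes-style search function -- the names and behavior are assumptions for illustration, not the capstone's actual specification. In the chapter you would write the stub with a `...` body first, run the tests to see them fail (RED), and only then prompt AI for the implementation:

```python
def search_notes(notes: list[str], query: str) -> list[str]:
    """Return all notes containing query, case-insensitive."""
    q = query.lower()
    return [note for note in notes if q in note.lower()]


# Tests written before any implementation exists (shown as plain asserts
# here; in the chapter they would be pytest test functions).
notes = ["Buy milk", "Email Alice", "buy stamps"]
assert search_notes(notes, "buy") == ["Buy milk", "buy stamps"]
assert search_notes(notes, "zzz") == []
assert search_notes([], "anything") == []
```

The point of the exercise is that every behavior -- matching, case-insensitivity, the empty cases -- is pinned down by a test before generation, so "verify" is just running the suite.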
Your PRIMM-AI+ starter kit from Chapter 42 works throughout Phase 4. The /predict, /investigate, /bug, /debug, and /tdg commands continue here. /bug classifies errors and /debug guides the full five-step debugging loop (reproduce, isolate, identify, fix, verify).
What You Need Before Starting
Phase 4 assumes you can:
- Write function stubs with type annotations and `...` bodies (Ch 46, 49)
- Write test suites with fixtures, parametrize, and pytest.raises (Ch 52)
- Run the full verification pipeline: ruff → pyright → pytest (Ch 44)
- Iterate on AI output with specific re-prompts (Ch 53)
- Handle exceptions with try/except and raise custom errors (Ch 54)
- Validate external data with Pydantic BaseModel (Ch 55)
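As a quick self-check for the exception-handling prerequisite, here is a stdlib-only sketch of raising and catching a custom error at a boundary. The `EmptyQueryError` name is hypothetical, chosen for illustration:

```python
class EmptyQueryError(ValueError):
    """Raised when a search query is blank."""


def validate_query(query: str) -> str:
    """Strip whitespace and reject blank queries."""
    stripped = query.strip()
    if not stripped:
        raise EmptyQueryError("query must not be blank")
    return stripped


# The caller handles the failure explicitly instead of letting it crash.
try:
    validate_query("   ")
except EmptyQueryError as err:
    print(f"rejected: {err}")
```

If defining a custom exception subclass or writing this try/except feels unfamiliar, that is a sign to revisit Ch 54 first.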
If any of these feel shaky, revisit the specific chapter before starting Phase 4.
Phase 4 Complete: What You Can Do Now
By finishing Phase 4, you have built the complete verification pyramid:
- Types at the base (pyright): Catch structural errors before code runs
- Tests in the middle (pytest): Define "correct" and verify every change
- Human review at the top (PRIMM + debugging loop): Catch what automated tools miss
You can now:
- Debug systematically (Ch 56): Read tracebacks, print-debug silent bugs, recognize AI's recurring mistakes, and follow the five-step loop every time.
- Drive TDG independently (Ch 57): Translate a problem statement into a specification, write the test suite, prompt AI, verify the result, and debug failures -- all without scaffolding.
The TDG method is now yours. In Phase 5, you apply it to a new domain: object-oriented programming. The method stays the same. The specifications become class interfaces instead of function signatures.