Phase 3: Tests as Specification
Chapters 50-55: Verify with Tests and Models
Your role: the Verifier ("I can define correctness, model data, and prove it")
A type signature tells AI what shape the code should have. A test tells AI what it should do. Phase 3 teaches control flow (how code makes decisions and repeats), data modeling (replacing fragile dicts with structured types), pytest (how you define "correct" before implementation exists), iterating on AI output (the feedback loop that makes TDG reliable), error handling (anticipating what can go wrong), and validation (ensuring external data is safe). By the end, you write complete test suites that serve as the full specification AI implements against.
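The test-suite-as-specification idea can be sketched in miniature. Everything here is a hypothetical example (the `slugify` function and its behavior are not from the chapters): the three tests are written first and fully define "correct"; the implementation is what an AI assistant would generate against them.

```python
import re

def slugify(title: str) -> str:
    # The implementation an AI assistant generates to satisfy the tests below.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The tests are the specification: they exist before the implementation
# and describe what the code must do, not how it does it.
def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("Tests: as Specification!") == "tests-as-specification"

def test_empty_input():
    assert slugify("") == ""
```

Run under pytest, these three tests pass or fail independently of who (or what) wrote the function body, which is exactly what makes them usable as a verification gate for generated code.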
Phase 3 is split into two sub-phases that follow a pain-driven progression: each concept is earned through frustration with the previous approach.
Phase 3a: Control Flow, Data Models, and Testing (Ch 50-52)
| # | Chapter | Key Focus |
|---|---|---|
| 50 | Control Flow | How code makes decisions and repeats: if/elif/else, for, while, nested loops |
| 51 | Data Models with Dataclasses | Replacing dict[str, str] with @dataclass: structured types pyright can verify |
| 52 | pytest Deep Dive | Fixtures, parametrize, coverage: tests as the specification document |
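Chapter 51's dict-to-dataclass move can be sketched as follows. The `Book` model and its fields are hypothetical examples, not taken from the book:

```python
from dataclasses import dataclass

# Before: book = {"title": "Deep Work", "pages": "304"}
# pyright cannot tell whether "pages" exists or what type its value is.

@dataclass
class Book:
    title: str
    pages: int

    def reading_hours(self, pages_per_hour: int = 40) -> float:
        # pyright verifies that pages is an int, so this arithmetic is safe.
        return self.pages / pages_per_hour

book = Book(title="Deep Work", pages=304)
print(book.reading_hours())  # 7.6
```

With the dataclass in place, a typo like `book.pags` or passing `pages="304"` becomes a static error pyright catches before the code ever runs, rather than a silent bug hiding in a dict.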
Phase 3b: Iteration, Error Handling, and Validation (Ch 53-55)
| # | Chapter | Key Focus |
|---|---|---|
| 53 | Iterating on AI Output | The feedback loop: evaluate, refine prompts, re-generate, verify |
| 54 | Error Handling and Exceptions | try/except, raise, finally, the with statement, and the pain of manual validation |
| 55 | Validation with Pydantic | Runtime validation at system boundaries: BaseModel replaces manual checking |
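The "manual validation pain" that motivates Chapter 55 can be sketched with stdlib-only code. The payload shape and `parse_user` function are hypothetical examples; every hand-written check below is the kind of boilerplate a Pydantic BaseModel would replace with declared field types:

```python
def parse_user(payload: dict) -> tuple[str, int]:
    # Manual checking at a system boundary: each field needs its own
    # existence check, type coercion, and range check.
    try:
        name = payload["name"]
        age = int(payload["age"])
    except KeyError as exc:
        raise ValueError(f"missing field: {exc}") from exc
    except (TypeError, ValueError) as exc:
        raise ValueError("age must be an integer") from exc
    if not isinstance(name, str) or not name:
        raise ValueError("name must be a non-empty string")
    if age < 0:
        raise ValueError("age must be non-negative")
    return name, age

print(parse_user({"name": "Ada", "age": "36"}))  # ('Ada', 36)
```

Two fields already cost a dozen lines of checking; Pydantic moves those rules into the model declaration, so validation errors are raised automatically and consistently at every boundary.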