
Phase 3: Tests as Specification

Chapters 50-55: Verify with Tests and Models

Your role: Verifier — "I can define correct, model data, and prove it"

A type signature tells AI what shape the code should have. A test tells AI what it should do. Phase 3 teaches six skills: control flow (how code makes decisions and repeats), data modeling (replacing fragile dicts with structured types), pytest (how you define "correct" before an implementation exists), iterating on AI output (the feedback loop that makes TDG reliable), error handling (anticipating what can go wrong), and validation (ensuring external data is safe). By the end, you write complete test suites that serve as the full specification AI implements against.
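The test-first idea above can be sketched in a few lines. `slugify` is a hypothetical function invented for illustration; the point is that the tests pin down "correct" before any implementation exists.

```python
def slugify(title: str) -> str:
    """Hypothetical implementation, written only after the tests below."""
    return "-".join(title.lower().split())


# These tests are the specification: they define behavior any
# implementation (human- or AI-written) must satisfy.
def test_lowercases_and_hyphenates() -> None:
    assert slugify("Hello World") == "hello-world"


def test_collapses_extra_whitespace() -> None:
    assert slugify("  spaced   out  ") == "spaced-out"
```

Run with `pytest` to execute every `test_*` function; Chapter 52 builds on this pattern with fixtures, parametrize, and coverage.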

Phase 3 is split into two sub-phases that follow a pain-driven progression: each concept is earned through frustration with the previous approach.

Phase 3a: Control Flow, Data Models, and Testing (Ch 50-52)

| # | Chapter | Key Focus |
|---|---------|-----------|
| 50 | Control Flow | How code makes decisions and repeats: if/elif/else, for, while, nested loops |
| 51 | Data Models with Dataclasses | Replacing dict[str, str] with @dataclass: structured types pyright can verify |
| 52 | pytest Deep Dive | Fixtures, parametrize, coverage: tests as the specification document |
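The Chapter 51 move — a fragile dict replaced by a type pyright can check — can be sketched as follows (the `User` shape is a hypothetical example, not from the book):

```python
from dataclasses import dataclass

# Before: user = {"name": "Ada", "email": "ada@example.com"}
# A typo like user["emial"] is a runtime KeyError pyright cannot catch.


@dataclass(frozen=True)
class User:
    name: str
    email: str


user = User(name="Ada", email="ada@example.com")
# A typo like user.emial is now a pyright error before the code ever runs.
```

`frozen=True` also makes instances hashable and comparable by value, which keeps test assertions simple.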

Phase 3b: Iteration, Error Handling, and Validation (Ch 53-55)

| # | Chapter | Key Focus |
|---|---------|-----------|
| 53 | Iterating on AI Output | The feedback loop: evaluate, refine prompts, re-generate, verify |
| 54 | Error Handling and Exceptions | try/except, raise, finally, with open, manual validation pain |
| 55 | Validation with Pydantic | Runtime validation at system boundaries: BaseModel replaces manual checking |
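Chapters 54-55 contrast manual checking with declarative validation at the boundary. A minimal sketch, assuming Pydantic v2 and a hypothetical `Signup` model:

```python
from pydantic import BaseModel, ValidationError


class Signup(BaseModel):
    username: str
    age: int


# Valid external data (e.g. a parsed JSON request body) is checked and coerced:
ok = Signup.model_validate({"username": "ada", "age": "36"})  # "36" becomes 36

# Invalid data raises ValidationError instead of slipping through,
# replacing a pile of manual isinstance/KeyError checks at the boundary:
try:
    Signup.model_validate({"username": "ada"})  # "age" is missing
except ValidationError as exc:
    print(f"rejected with {exc.error_count()} error(s)")
```

The try/except here is Chapter 54's toolkit; BaseModel is Chapter 55's replacement for writing those checks by hand.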

Starter Kit

Your PRIMM-AI+ starter kit from Chapter 42 works throughout Phase 3. The /predict, /investigate, /bug, and /tdg commands you used in Phase 2 continue here. The same 8 commands cover all Phase 3 workflows: no new commands needed.