Chapter 2: Detecting Broken Reasoning
AI sounds confident whether it is right or wrong. The student who cannot tell the difference is more dangerous with AI than without it.
Core Skill: Verification and Discernment
You will use the Question Formulation skill from Chapter 1 to design your error-detection queries. The Reasoning Receipt format you learned there also carries forward: from here on, annotating AI output becomes second nature.
This chapter trains you to become a systematic error detector. Not vague skepticism ("don't trust AI") but precise, categorized analysis of where and how reasoning breaks. You will develop an Error Taxonomy that you carry through the rest of the book and apply to every AI interaction.
Teaching Aid
What You Will Learn
- How to predict where AI will fail before prompting it
- The 8-category Error Taxonomy for classifying AI failures (factual error, logical gap, false confidence, missing context, correlation-causation confusion, outdated information, fabricated citation, cultural blind spot); a data-structure sketch follows this list
- How to build an analysis more rigorous than either AI tool produces on its own, through iterative drafts
- Why domain expertise is your most powerful error-detection tool
- How to calibrate your confidence in AI accuracy under time pressure
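If you prefer to keep your annotations in a machine-readable form, here is a minimal sketch of the taxonomy as a Python structure. The eight category names come from the list above; the `Annotation` record and its fields are hypothetical illustrations, not a format this book prescribes.

```python
from dataclasses import dataclass
from enum import Enum

class ErrorCategory(Enum):
    """The 8-category Error Taxonomy from the list above."""
    FACTUAL_ERROR = "factual error"
    LOGICAL_GAP = "logical gap"
    FALSE_CONFIDENCE = "false confidence"
    MISSING_CONTEXT = "missing context"
    CORRELATION_CAUSATION = "correlation-causation confusion"
    OUTDATED_INFORMATION = "outdated information"
    FABRICATED_CITATION = "fabricated citation"
    CULTURAL_BLIND_SPOT = "cultural blind spot"

@dataclass
class Annotation:
    """One flagged span of AI output (hypothetical record layout)."""
    excerpt: str             # the AI text being flagged
    category: ErrorCategory  # which taxonomy bucket it falls into
    note: str                # why you flagged it

# Example: tagging a confident but unsourced claim.
flag = Annotation(
    excerpt="Studies consistently show a 40% improvement.",
    category=ErrorCategory.FABRICATED_CITATION,
    note="No study named; the number appears invented.",
)
print(f"[{flag.category.value}] {flag.excerpt} -- {flag.note}")
```

Using a fixed enumeration rather than free-text labels forces every flag into exactly one taxonomy bucket, which is what makes your annotations countable and comparable across chapters.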
Exercises
| Exercise | Title | Layers Used | What You Build |
|---|---|---|---|
| 1 | The Error Prediction | Layer 1, Layer 2 | Error prediction document + annotated AI responses |
| 2 | The Contradiction Test | Layer 4, Layer 6 | Three-draft analysis with evolution notes |
| 3 | Build It, Then Break It | Layer 5, Layer 3 | Domain expertise annotations + cross-domain verification |
| 4 | Confidence Calibration | Layer 1, Layer 6 | 20-claim Confidence Calibration Chart |
Chapter Deliverable
An Error Detection Portfolio containing all four exercise deliverables plus every AI feedback response, with your reflections on each. The Confidence Calibration exercise is repeated at the end of the book to measure how much your calibration improves; a hedged scoring sketch follows.
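One way to score the 20-claim Confidence Calibration Chart is to record, for each claim, the confidence you assigned to the AI being right and whether it actually was, then compare your average confidence with the actual hit rate. The sketch below uses made-up entries purely for illustration, and the overconfidence-gap formula is one simple option, not the book's prescribed method.

```python
# Minimal calibration scoring sketch (illustrative data, not real results).
# Each entry: (your confidence that the AI claim was correct, whether it was).
chart = [
    (0.9, True), (0.8, True), (0.9, False), (0.6, True),  (0.7, False),
    (0.95, True), (0.5, False), (0.8, True), (0.4, False), (0.6, True),
]  # a full chart would have 20 entries

avg_confidence = sum(conf for conf, _ in chart) / len(chart)
hit_rate = sum(1 for _, correct in chart if correct) / len(chart)

# A positive gap means overconfidence: you trusted the AI more than it earned.
gap = avg_confidence - hit_rate
print(f"average confidence: {avg_confidence:.2f}")
print(f"actual accuracy:    {hit_rate:.2f}")
print(f"overconfidence gap: {gap:+.2f}")
```

Running the same scoring at the end of the book and watching the gap shrink toward zero is one concrete way to see your calibration improving.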