Updated Mar 16, 2026

Chapter 2: Detecting Broken Reasoning

AI sounds confident whether it is right or wrong. The student who cannot tell the difference is more dangerous with AI than without it.

Core Skill: Verification and Discernment

Building On Previous Chapters

You will use the Question Formulation skill from Chapter 1 to design your error-detection queries. The Reasoning Receipt format you learned carries forward — annotating AI output becomes second nature from here on.

This chapter trains you to become a systematic error detector. Not vague skepticism ("don't trust AI") but precise, categorized analysis of where and how reasoning breaks. You will develop an Error Taxonomy that you carry through the rest of the book and apply to every AI interaction.


What You Will Learn

  • How to predict where AI will fail before prompting it
  • The 8-category Error Taxonomy for classifying AI failures (factual error, logical gap, false confidence, missing context, correlation-causation confusion, outdated information, fabricated citation, cultural blind spot)
  • How to build an analysis more rigorous than either AI tool's output through iterative drafts
  • How domain expertise is your most powerful error detection tool
  • How to calibrate your confidence in AI accuracy under time pressure

Exercises

| Exercise | Title | Layers Used | What You Build |
|---|---|---|---|
| 1 | The Error Prediction | Layer 1, Layer 2 | Error prediction document + annotated AI responses |
| 2 | The Contradiction Test | Layer 4, Layer 6 | Three-draft analysis with evolution notes |
| 3 | Build It, Then Break It | Layer 5, Layer 3 | Domain expertise annotations + cross-domain verification |
| 4 | Confidence Calibration | Layer 1, Layer 6 | 20-claim Confidence Calibration Chart |
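The Confidence Calibration Chart in Exercise 4 can also be scored mechanically. Here is a minimal Python sketch, under stated assumptions: the 10%-wide confidence buckets, the `(confidence, correct)` record format, and the taxonomy labels' spellings are illustrative choices, not the book's prescribed format.

```python
from collections import defaultdict

# The chapter's 8-category Error Taxonomy, encoded as simple labels
# (label spellings are an assumption for this sketch).
ERROR_TAXONOMY = {
    "factual_error", "logical_gap", "false_confidence", "missing_context",
    "correlation_causation", "outdated_information",
    "fabricated_citation", "cultural_blind_spot",
}

def calibration_by_bucket(claims):
    """Group (stated_confidence, was_correct) pairs into 10%-wide buckets
    and return the observed accuracy per bucket. A well-calibrated reader's
    0.8 bucket should contain claims that held up about 80% of the time."""
    buckets = defaultdict(list)
    for confidence, correct in claims:
        buckets[round(confidence, 1)].append(correct)
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

# Hypothetical entries from a 20-claim chart: (your confidence, claim held up)
claims = [(0.9, True), (0.9, True), (0.9, False), (0.5, True), (0.5, False)]
print(calibration_by_bucket(claims))  # observed accuracy at each confidence level
```

A gap between a bucket's label and its observed accuracy is exactly the miscalibration the end-of-book retest measures.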

Chapter Deliverable

An Error Detection Portfolio containing all four exercise deliverables, plus every AI feedback response with your reflections on each. The Confidence Calibration exercise is repeated at the end of the book to measure how much your calibration improves.