Updated Mar 16, 2026

The Error Prediction

AI sounds confident whether it is right or wrong. The student who cannot tell the difference is more dangerous with AI than without it.

Building On Previous Chapters

You will use the Question Formulation skill from Chapter 1 to design your error-detection queries. The Reasoning Receipt format you learned carries forward — annotating AI output becomes second nature from here on.

This chapter trains you to become a systematic error detector. Not vague skepticism ("don't trust AI") but precise, categorized analysis of where and how reasoning breaks. You will develop an Error Taxonomy that you carry through the rest of the book and apply to every AI interaction.

The Error Taxonomy

Factual error: a claim that is demonstrably false
Logical gap: a conclusion that does not follow from the premises
False confidence: stating uncertain information with unjustified certainty
Missing context: omitting crucial factors that would change the analysis
Correlation-causation confusion: treating a correlation as proof of causation
Outdated information: using data or facts that are no longer current
Fabricated citation: referencing a source that does not exist
Cultural blind spot: assuming one cultural context applies universally
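The eight categories can double as a controlled vocabulary when you annotate responses, so mislabeled or invented categories are caught early. A minimal Python sketch: the category names come from the taxonomy above, while the `tally` helper and the sample labels are hypothetical illustrations, not part of the exercise.

```python
from collections import Counter

# The eight taxonomy categories, exactly as defined above.
TAXONOMY = [
    "factual error",
    "logical gap",
    "false confidence",
    "missing context",
    "correlation-causation confusion",
    "outdated information",
    "fabricated citation",
    "cultural blind spot",
]

def tally(annotations):
    """Count how often each taxonomy category appears in a list of
    per-sentence labels, rejecting any label outside the taxonomy."""
    counts = Counter()
    for label in annotations:
        if label not in TAXONOMY:
            raise ValueError(f"not a taxonomy category: {label}")
        counts[label] += 1
    return counts

# Hypothetical annotations for a three-sentence response.
print(tally(["logical gap", "missing context", "logical gap"]))
```

Keeping the labels in one canonical list means a typo like "logic gap" fails loudly instead of silently splitting your counts across two spellings.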

Exercise 1: The Error Prediction

Layers Used: Layer 1 (Predict Before You Prompt), Layer 2 (Reasoning Receipt)

Building On Previous Chapters

You used the Prediction Lock format in Chapter 1, Exercise 1. You predicted question quality there; now you predict error types.

What You Do

You receive a complex question. Before prompting any AI, write down: (a) what you think the correct analysis involves (key factors, tradeoffs, data needed), (b) where you predict AI will be strong in its analysis, and (c) where you predict AI will make errors or miss important context. Submit this prediction. Then prompt both Claude and ChatGPT with the identical question. Annotate each response line by line using the Error Taxonomy: factual error, logical gap, false confidence, missing context, correlation-causation confusion, outdated information, fabricated citation, cultural blind spot.

Choose Your Scenario

Scenario A (Policy): "Should developing nations invest heavily in nuclear energy to meet growing power demands?"

Choose one.


Your Deliverable

1. Your sealed prediction document (written before any AI prompting) listing expected AI strengths and weaknesses.
2. Two annotated AI responses with every sentence labeled using the Error Taxonomy categories.
3. A comparison table showing your predicted errors vs. actual errors found, and your predicted strengths vs. actual strengths.
4. A count of each error type found across both tools.
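Once every sentence is labeled, the comparison table and the per-tool counts can be produced mechanically. A sketch under the assumption that annotations are stored as plain lists of category strings; the `compare` helper and the sample labels are illustrative, not prescribed by the exercise.

```python
from collections import Counter

def compare(predicted, actual):
    """Split categories into (hits, missed, false_alarms):
    predicted and found, found but not predicted, predicted but not found."""
    p, a = set(predicted), set(actual)
    return sorted(p & a), sorted(a - p), sorted(p - a)

# Hypothetical annotations for two tools and one prediction document.
claude = ["logical gap", "missing context", "outdated information"]
chatgpt = ["false confidence", "missing context"]
predicted = ["missing context", "fabricated citation"]

hits, missed, false_alarms = compare(predicted, claude + chatgpt)
print("hit:", hits)
print("missed:", missed)
print("false alarms:", false_alarms)
print("per-tool counts:", Counter(claude), Counter(chatgpt))
```

The "missed" list is the most valuable output: it names the error types your internal model of AI failure does not yet cover.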

1. Your Work

I am learning to detect errors in AI-generated analysis. I asked both Claude and ChatGPT about a scenario question and then annotated both responses using an Error Taxonomy (factual error, logical gap, false confidence, missing context, correlation-causation confusion, outdated information, fabricated citation, cultural blind spot). Please:

(1) Review my error annotations: did I correctly identify each error? Flag any false positives (things I marked as errors that are actually correct) and false negatives (errors I missed).
(2) Rate my error detection accuracy as a percentage.
(3) For each error I missed, explain how I should have caught it.
(4) Rate my use of the Error Taxonomy: am I categorizing errors correctly or misclassifying them?
(5) What patterns do you see in my error detection? Which types am I good at catching, and which do I consistently miss?

Here are the AI responses with my annotations:

Here is my prediction document:

Finally, complete the Thinking Score Card for this exercise: Independent Thinking (1-10), Critical Evaluation (1-10), Reasoning Depth (1-10), Originality (1-10), Self-Awareness (1-10). For each score, give a one-sentence justification.

2. Get Your Score

Discuss with an AI. Question your scores.
Come back when you have your BEST evaluation.


What This Teaches You

You learn that error detection is a trainable skill with specific categories, not just a vague feeling that something is off. By predicting AI errors before seeing them, you develop an internal model of where AI fails. The AI self-check reveals your own blind spots — the error types you consistently miss — which is exactly the information you need to improve.
