Updated Mar 15, 2026

Build It, Then Break It

Layers Used: Layer 5 (Divergence Test), Layer 3 (Live Defence)

What You Do

Use AI to generate a complete analysis of a topic you know well — your own field, your city, your industry. Because you have domain expertise, you can catch errors the AI makes that a non-expert would miss. Annotate the AI output line by line using the Error Taxonomy. Then pair with a student from a different domain. Exchange your annotated AI outputs. Attempt to verify your partner's annotations — can you confirm their error catches are real? Discuss in a live 10-minute session.

Solo Learner Alternative

If you cannot pair with a domain partner, choose two domains: one you know well and one you know nothing about. Generate AI analyses for both. Annotate errors in your expert domain (where you catch things AI gets wrong) and then attempt to annotate errors in the unfamiliar domain. Compare your detection rate. The gap between the two reveals exactly how much domain expertise matters for error detection.
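If it helps to make the comparison concrete, the detection-rate gap can be computed in a few lines of Python. This is a hypothetical sketch; the counts below are invented placeholders, so substitute your own tallies after running the AI Check Prompt on both analyses:

```python
# Sketch of the detection-rate comparison described above.
# All counts are made-up placeholders, not data from the exercise.

def detection_rate(caught: int, total: int) -> float:
    """Fraction of genuine errors you annotated (0.0 if none existed)."""
    return caught / total if total else 0.0

# Hypothetical tallies: (errors you caught, genuine errors confirmed)
expert_caught, expert_total = 9, 12          # your own field
unfamiliar_caught, unfamiliar_total = 2, 11  # the field you know nothing about

expert_rate = detection_rate(expert_caught, expert_total)
unfamiliar_rate = detection_rate(unfamiliar_caught, unfamiliar_total)
gap = expert_rate - unfamiliar_rate

print(f"Expert domain:     {expert_rate:.0%}")
print(f"Unfamiliar domain: {unfamiliar_rate:.0%}")
print(f"Expertise gap:     {gap:.0%}")
```

With these placeholder numbers the expert-domain rate is 75% against 18% in the unfamiliar domain, which is the gap the exercise asks you to reflect on.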


Your Deliverable

(1) The AI-generated analysis of your domain with line-by-line Error Taxonomy annotations.
(2) A separate document listing two things: errors you caught because of your domain expertise that a non-expert would miss, and errors you suspect exist but cannot confirm without more research.
(3) Your partner's annotated output with your verification notes.
(4) A reflection (200 words) on the difference between detecting errors in your domain vs. your partner's domain.

AI Check Prompt -- Copy and paste into claude.ai or chatgpt.com
I am a student testing my error detection skills. I asked AI to analyze
a topic I am an expert in: [your domain]. I then annotated the response
with every error I found using this taxonomy: factual error, logical gap,
false confidence, missing context, correlation-causation confusion, outdated
information, fabricated citation, cultural blind spot. Please:

(1) For each error I identified, confirm whether it is a genuine error or
a false positive, and explain your reasoning.
(2) Are there errors in the original AI analysis that I missed? List them
with categories.
(3) Rate my overall error detection accuracy.
(4) Which error categories am I strongest and weakest at detecting in my
own domain?
(5) Rate the depth of my annotations -- am I just flagging errors or am I
explaining WHY they are errors?

AI analysis: [paste].
My annotations: [paste].

Finally, complete the Thinking Score Card for this exercise:
Independent Thinking (1-10), Critical Evaluation (1-10),
Reasoning Depth (1-10), Originality (1-10), Self-Awareness (1-10).
For each score, give a one-sentence justification.

What This Teaches You

You learn that domain expertise is your most powerful error detection tool. In your own field, you catch things AI gets subtly wrong that outsiders would accept. In your partner's field, you discover how much harder error detection is without expertise. This teaches you to be cautious when using AI in domains you do not deeply understand — and to seek expert review when the stakes are high.