The Question Tournament
Layers Used: Layer 3 (Live Defence), Layer 5 (Divergence Test)
What You Do
Working in pairs, each student generates 15 questions about the same scenario without AI. Swap question lists with your partner. Rank their 15 questions from most to least diagnostic and write a one-sentence justification for each ranking. Then take the top 5 from each list (10 total), feed them to both Claude and ChatGPT, and compare: which questions actually produced useful, divergent, actionable answers, and which produced generic filler?
If you are working solo, generate your 15 questions, then prompt the AI: "You are my study partner. Generate 15 diagnostic questions for this scenario that are different from mine. I will share my questions only after you have generated yours." Once the AI has produced its 15, rank its questions and have it rank yours, then proceed with the comparison table. The dynamic differs (an AI partner is more consistent than a human one), but the core skill of evaluating someone else's questions still develops.
You submit three artifacts: (1) your 15 original questions, written without AI; (2) your partner's 15 questions with your ranking and a one-sentence justification for each; and (3) a comparison table showing the top 10 questions, the AI responses from both tools, and a column marking each response as "useful/actionable" or "generic/filler" with an explanation.
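If your class wants the comparison tables in a consistent format, a short script can generate them from the raw data. A minimal sketch in Python; the field names and sample rows below are illustrative, not prescribed by the exercise:

```python
# Build the tournament comparison table as a Markdown table.
# Field names ("question", "claude", "chatgpt", "verdict", "why")
# are hypothetical -- use whatever columns your class agrees on.

rows = [
    {
        "question": "What constraint would break this plan first?",
        "claude": "Names the cash-flow gap and a mitigation.",
        "chatgpt": "Lists three generic risks.",
        "verdict": "useful/actionable",
        "why": "Forces a specific failure mode.",
    },
    {
        "question": "What are the pros and cons?",
        "claude": "Balanced but vague summary.",
        "chatgpt": "Nearly identical summary.",
        "verdict": "generic/filler",
        "why": "Too broad to produce divergent answers.",
    },
]

def to_markdown(rows):
    """Render the comparison rows as a Markdown table string."""
    header = ["Question", "Claude", "ChatGPT", "Verdict", "Why"]
    lines = [
        "| " + " | ".join(header) + " |",
        "|" + "---|" * len(header),
    ]
    for r in rows:
        cells = [r["question"], r["claude"], r["chatgpt"], r["verdict"], r["why"]]
        lines.append("| " + " | ".join(cells) + " |")
    return "\n".join(lines)

print(to_markdown(rows))
```

Pasting the resulting table into the evaluation prompt keeps the input uniform across pairs, which makes the AI's cross-set comparison easier to read.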
When your comparison table is complete, run the following evaluation prompt:

I am learning to evaluate question quality. Below are two sets of questions
about the same business scenario -- one set written by me and one by my
partner. I have also included the AI responses each question generated.
Please:
(1) Evaluate which set of questions was overall more diagnostic and
explain why.
(2) Identify the 3 strongest questions across both sets and explain what
makes them effective.
(3) Identify the 3 weakest questions and explain what makes them
unproductive.
(4) Were there any questions that seemed good on paper but produced
generic AI responses? Explain why this happened.
(5) Give me specific feedback on how to improve my weakest questions.
Scenario: [paste scenario].
My questions: [paste].
Partner's questions: [paste].
AI responses: [paste comparison table].
Finally, complete the Thinking Score Card for this exercise:
Independent Thinking (1-10), Critical Evaluation (1-10),
Reasoning Depth (1-10), Originality (1-10), Self-Awareness (1-10).
For each score, give a one-sentence justification.
What This Teaches You
You learn that question quality is a skill you can evaluate and improve, not an innate talent. By seeing your partner's questions and having AI compare both sets, you discover questioning patterns you would never notice in your own work. The tournament format makes the difference between a good question and a great question viscerally clear.