Updated Mar 07, 2026

The Five Questions — Expert Interview Framework

Method A is used when the knowledge you need to encode lives primarily in the heads of experienced professionals rather than in documents. It applies to domains where the most important expertise is tacit: the financial analyst's risk calibration, the lead architect's coordination judgement, the compliance officer's instinct for which contract clause deserves more scrutiny than its surface reading suggests.

The method is structured around five questions. Not because five is a magic number, but because these five questions, in this order, reliably surface the three kinds of tacit knowledge that most domain agent SKILL.md files need: the decision-making logic the expert applies, the exceptions and edge cases that standard frameworks miss, and the escalation conditions that separate what the agent should handle autonomously from what it should route to a human.

The questions are designed for a conversation, not a form. They are prompts for a structured interview with the domain expert whose knowledge you are encoding — which may be yourself, a colleague, or a subject-matter expert you are engaging specifically for this purpose. A single interview of sixty to ninety minutes, conducted properly with these five questions, produces enough material to write a substantive first-draft SKILL.md. What the interview will not produce, and what no interview can produce, is a complete SKILL.md. The gap between the first draft and the production-ready version is what the Validation Loop in Lesson 8 closes.

Question 1: Walk Me Through a Recent Example of This Work Going Well

Not "what do you do?" and not "what are the best practices for this?" Both of those questions invite the expert to perform their knowledge rather than reveal it. They produce the official version of expertise — the version that appears in job descriptions and training manuals — rather than the operational version that drives actual decisions.

Asking for a specific recent example of work going well does something different. It activates episodic memory rather than semantic memory. The expert does not retrieve a stored description of their expertise; they reconstruct a specific event. And in reconstructing it, they cannot help but include the details that make it specific: what they noticed, what they were uncertain about, what they decided and why, what happened next. These details are the raw material of the SKILL.md.

Follow-up questions: "What did you look for first?" "What told you this was going the right way?" "What would you have done differently if X had been the case instead?"

Credit analyst example: Asked to describe a recent credit assessment that went well, the analyst does not say "I evaluated the financials and checked the ratios." She says: "There was a mid-market manufacturing company applying for a term loan to fund a capacity expansion. The headline numbers were strong — DSCR above 2.0, LTV under 60%. But when I looked at the working capital cycle, I noticed the receivables days had been creeping up over three quarters while revenue was flat. That told me the revenue quality was weakening — they were extending payment terms to keep the topline steady. I flagged it, we restructured the covenant package to include a receivables concentration test, and the deal closed with tighter protections. Six months later, their largest customer went into administration. The covenant saved us."

That single account contains decision-making logic (look at working capital cycle, not just headline ratios), a specific pattern (receivables days increasing while revenue is flat signals revenue quality issues), and a protective action (restructure covenants to include a receivables concentration test). Each of those is a candidate SKILL.md Principle.
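The receivables pattern in that account is precise enough to state as a checkable rule. A minimal sketch, assuming quarterly figures in a simple record format — the field names, the flat-revenue tolerance, and the helper functions are illustrative assumptions, not part of the source:

```python
# Hypothetical sketch: the pattern "receivables days creeping up while
# revenue is flat" encoded as a checkable rule. Figures are illustrative.

def receivables_days(receivables: float, revenue: float, period_days: int = 90) -> float:
    """Days sales outstanding for one quarter."""
    return receivables / revenue * period_days

def flags_revenue_quality(quarters: list[dict]) -> bool:
    """Flag when receivables days rise over consecutive quarters
    while revenue stays roughly flat (the pattern the analyst described)."""
    days = [receivables_days(q["receivables"], q["revenue"]) for q in quarters]
    rising = all(later > earlier for earlier, later in zip(days, days[1:]))
    revenues = [q["revenue"] for q in quarters]
    flat = max(revenues) / min(revenues) < 1.05  # within roughly 5%
    return rising and flat

history = [
    {"receivables": 10.0, "revenue": 50.0},   # quarter 1 (£m)
    {"receivables": 11.5, "revenue": 50.5},   # quarter 2
    {"receivables": 13.2, "revenue": 49.8},   # quarter 3
]
print(flags_revenue_quality(history))  # → True
```

The point is not that a SKILL.md contains code — it is that the extracted pattern is specific enough that it *could* be written as one, which is the test of whether the interview surfaced operational knowledge rather than generalities.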

Question 2: Tell Me About a Time This Work Went Wrong

Specifically: not because of bad luck, but because of a judgement call that turned out to be mistaken.

This question surfaces the failure modes that the expert has personally encountered and learned from. It is the single most valuable question in the interview because the knowledge it produces is the knowledge that is hardest to find anywhere else. Post-mortems in professional contexts are often sanitised; experts who have made costly mistakes rarely document them in forms that others can access. But in a one-to-one conversation conducted with appropriate professional trust, most experienced professionals will describe at least one instructive failure — and the lesson they drew from it is often the most precise piece of domain knowledge you will extract.

Follow-up questions: "At what point could the mistake have been caught?" "What would have had to be true for you to have made a different call?" "Is there a signal you now look for that you weren't looking for then?"

Credit analyst example: "Early in my career, I approved a facility for a property developer. The balance sheet was strong, the LTV was conservative, and the development had pre-sales. What I missed was that the pre-sales were conditional — the contracts had break clauses tied to planning permission for a second phase. When the second phase was refused, the pre-sales unwound, and the developer's cash position deteriorated faster than the financial model had projected. I now always read the underlying contracts on any pre-sale figure, not just the headline number. And when a revenue line depends on a condition outside the borrower's control, I stress-test the scenario where that condition fails."

The Principles this produces are specific and testable: "When revenue projections depend on pre-sales, verify whether the underlying contracts are conditional. When any revenue line depends on a condition outside the borrower's control, run a stress scenario where that condition fails." These are instructions that prevent a specific, real failure mode — not generic advice about being careful.
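The stress-test instruction can be made concrete. A minimal sketch of removing a conditional revenue line and recomputing debt service coverage — the figures, names, and simplified DSCR formula are all hypothetical:

```python
# Illustrative only: a minimal version of the stress test described above.
# All figures and field names are assumptions, not from the source.

def dscr(cash_sources: dict, debt_service: float) -> float:
    """Debt service coverage ratio: cash available / debt service due."""
    return sum(cash_sources.values()) / debt_service

base = {"operating_cashflow": 8.0, "conditional_presales": 6.0}  # £m per year
debt_service = 9.0  # £m per year

# Base case includes the pre-sale revenue...
print(round(dscr(base, debt_service), 2))      # → 1.56

# ...the stress case removes the line that depends on a condition
# outside the borrower's control (here, the conditional pre-sales).
stressed = {k: v for k, v in base.items() if k != "conditional_presales"}
print(round(dscr(stressed, debt_service), 2))  # → 0.89
```

A coverage ratio that drops below 1.0 in the stressed scenario is exactly the failure mode the analyst's model missed.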

Question 3: What Does a Junior Get Wrong That a Senior Never Does?

This question is the most efficient path to the gap between described and actual expertise. Every experienced professional can answer it immediately, because it is the knowledge they spend their career transmitting to the people who work for them. And because they are describing someone else's errors rather than their own, the defences that come up in Question 2 are lower.

The answers almost always follow a pattern: the junior professional applies the rule without reading the context, or reads the context without knowing which rules apply to it, or escalates too early because they lack confidence, or escalates too late because they lack humility.

Follow-up questions: "Can you give me a specific example?" "What does the senior professional see that the junior one doesn't?" "How long does it typically take someone to learn this, and why does it take that long?"

Credit analyst example: "The junior analyst flags every net debt increase as a concern. The senior analyst knows that a net debt increase in the context of a capital investment programme with contracted revenue is categorically different from a net debt increase driven by operating losses. The junior analyst treats a covenant breach as binary — breached or not. The senior analyst reads the covenant with the loan documentation in hand and asks whether the breach is technical or substantive, whether the remedy period has been used correctly, and whether the breach pattern suggests deterioration or an isolated event."

Each of those distinctions — context-dependent interpretation of net debt, technical vs substantive covenant breaches — is a SKILL.md Principle. They are the instructions that encode the expertise differential between a junior analyst who applies rules mechanically and a senior analyst who reads context.

Question 4: Write a One-Page Decision Guide

If you had to write a one-page guide for this work — something that would help someone make the right call in ninety percent of situations — what would be on it?

This question asks the expert to compress their operational knowledge into transferable instructions. Most experts resist the framing initially — "it's more complicated than a one-pager can capture" — and they are correct. But the point of the question is not to produce the finished SKILL.md. It is to identify what the expert believes are the most load-bearing principles in their practice, because those are the instructions that need to appear in every version of the SKILL.md, however much else changes.

Follow-up questions: "What's the first thing on the page?" "What's the thing you'd most want to prevent someone from doing?" "Is there a heuristic you use that isn't in this guide because it's too hard to explain?"

That last follow-up is important. The knowledge that is too hard to explain is often the knowledge most worth encoding, and it requires more interview time to surface.

Credit analyst example: "First: always read the cashflow statement before the balance sheet. The balance sheet tells you what exists; the cashflow statement tells you what is happening. Second: never trust a revenue figure you cannot trace to a contract or a customer. Third: when the management narrative and the numbers tell different stories, trust the numbers. Fourth: if you cannot explain the credit risk in two sentences, you do not understand it well enough to approve it."

These four heuristics are load-bearing Principles. The first governs analytical sequence. The second governs source verification. The third governs conflict resolution between qualitative and quantitative information. The fourth is a self-test for decision readiness. All four translate directly into SKILL.md instructions.
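As one illustration of that translation — the wording and section heading here are a sketch, not a prescribed format — the four heuristics might appear in a SKILL.md like this:

```
## Principles

- Read the cashflow statement before the balance sheet: the balance
  sheet shows what exists; the cashflow statement shows what is happening.
- Never rely on a revenue figure that cannot be traced to a contract
  or a customer.
- When the management narrative and the numbers tell different stories,
  trust the numbers.
- Do not approve a credit whose risk you cannot explain in two sentences.
```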

Question 5: What Should an Automated System Never Handle?

What are the situations where you would not trust an automated system to handle this — and why?

This question defines the human-in-the-loop requirements for the SKILL.md. Every domain has conditions under which autonomous agent operation is inappropriate: not because the technology is insufficient, but because the professional judgement required to handle those conditions correctly is genuinely irreplaceable.

The answers typically cluster into three categories.

| Category | What It Means | SKILL.md Output |
| --- | --- | --- |
| Stakes too high | The consequences of a systematic error are unacceptable at any rate | Explicit routing rules with thresholds |
| Context too unusual | Standard rules do not apply and the agent cannot know it does not know | Uncertainty recognition instructions |
| Relationship is the service | The human interaction is part of the professional value | Boundary conditions on delegation |

Follow-up questions: "What is the threshold where you would want a human involved regardless of the system's track record?" "Can you describe a situation where the context was so unusual that no standard procedure applied?"

Credit analyst example: "Any credit decision above £25 million goes to the senior credit committee regardless of how strong the analysis looks — the reputational risk of a single large default is too high to accept any systematic error rate. Any situation where the borrower has a relationship with a board member or senior executive gets routed to an independent reviewer — the conflict of interest makes automated analysis inappropriate. And any credit assessment where I encounter a fact pattern I genuinely have not seen before — a novel industry structure, a regulatory regime I am not familiar with — I flag it explicitly and bring in a specialist rather than applying a framework that may not fit."

These three answers produce three distinct types of SKILL.md escalation conditions: threshold-based routing (£25 million), relationship-based routing (conflict of interest), and uncertainty-based routing (novel fact patterns). All three are essential for a production-quality SKILL.md in this domain.
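The three routing types can be sketched as explicit logic. A minimal illustration, assuming a simplified application record — the field names and handler labels are assumptions; only the £25 million threshold and the three conditions come from the example:

```python
# Sketch of the three escalation conditions as routing logic.
# Record fields and handler names are illustrative assumptions.

def route_credit_decision(application: dict) -> str:
    """Apply the three routing rules in order of precedence."""
    # Threshold-based routing: stakes too high for any systematic error rate
    if application["amount_gbp"] > 25_000_000:
        return "senior_credit_committee"
    # Relationship-based routing: conflict of interest
    if application["insider_relationship"]:
        return "independent_reviewer"
    # Uncertainty-based routing: novel fact pattern, standard framework may not fit
    if application["novel_fact_pattern"]:
        return "specialist_review"
    return "standard_analysis"

example = {"amount_gbp": 30_000_000,
           "insider_relationship": False,
           "novel_fact_pattern": False}
print(route_credit_decision(example))  # → senior_credit_committee
```

Note the ordering: the rules are checked by precedence, so a large insider-related deal still routes to the committee first. Making that precedence explicit is itself an interview question worth asking.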

How the Five Questions Map to the SKILL.md

The five questions are not random probes. Each one targets specific raw material for specific sections of the Agent Skills Pattern.

| Question | Primary Target | SKILL.md Section |
| --- | --- | --- |
| Q1: Recent success | Decision-making logic, analytical sequence | Principles (operational logic) |
| Q2: Instructive failure | Defensive knowledge, error prevention | Principles (what NOT to do) |
| Q3: Junior vs senior | Expertise differential, contextual judgement | Principles (nuanced distinctions) |
| Q4: One-page guide | Load-bearing heuristics, core operating rules | Principles (non-negotiable rules) |
| Q5: Automation boundaries | Escalation conditions, human-in-the-loop gates | Questions (out of scope) + Principles (routing logic) |

The Persona section draws from all five questions — the professional identity that emerges from the interview as a whole. But the Principles section is where the majority of the extraction material lands, because the Principles are where tacit knowledge becomes explicit instruction.
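Putting the mapping together, a first-draft skeleton might look like this — section names follow the Persona / Principles / Questions structure referenced above, and the bracketed placeholders are illustrative, not prescribed content:

```
# SKILL.md — Credit Assessment (first-draft skeleton)

## Persona
[Professional identity synthesised from the interview as a whole]

## Principles
[Operational logic from Q1, failure prevention from Q2,
 contextual distinctions from Q3, load-bearing heuristics from Q4,
 routing logic from Q5]

## Questions (out of scope)
[Escalation conditions from Q5: thresholds, conflicts, novel patterns]
```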

Try With AI

Use these prompts in Anthropic Cowork or your preferred AI assistant to practise the interview framework.

Prompt 1: Self-Interview

I want to practise the Knowledge Extraction Method on myself.
I work as [YOUR ROLE] in [YOUR INDUSTRY].

Ask me the five extraction questions in sequence. For each question:
1. Ask the main question
2. Wait for my answer
3. Ask two follow-up questions based on what I actually said
(not generic follow-ups)
4. After my follow-up answers, summarise the tacit knowledge
that surfaced and identify which SKILL.md section it maps to

After all five questions, produce a "north star summary" — two
paragraphs describing the most important decision-making logic
and the most important escalation condition that emerged.

What you're learning: The five questions work on any domain, including your own. By experiencing them as the interviewee, you develop intuition for what rich answers feel like versus surface-level ones — which is essential preparation for conducting the interview with someone else. The north star summary at the end previews the synthesis technique taught in Lesson 3.

Prompt 2: Question Design Analysis

Analyse why the five extraction questions are ordered the way they are.
For each question, explain:

1. What psychological state is the interviewee in at this point in
the conversation?
2. Why does THIS question work better at this position than earlier
or later?
3. What does the question's output provide that earlier questions
have not yet surfaced?

Then identify what would go wrong if the questions were asked in
reverse order (5, 4, 3, 2, 1). What kind of extraction material
would you lose and why?

What you're learning: The question sequence is a designed progression, not an arbitrary list. Understanding the design logic helps you adapt the questions to domains where the standard sequence may need adjustment — for example, when an expert is most forthcoming about failures early in the conversation rather than after building trust through a success story.

Prompt 3: Domain Adaptation

I need to adapt the five extraction questions for [SPECIFIC DOMAIN:
e.g., contract law, clinical nursing, software architecture,
management consulting].

For each of the five questions, generate:
1. The domain-adapted version (same intent, domain-specific framing)
2. Three domain-specific follow-up questions
3. A realistic example of what a senior professional in this domain
might answer
4. The SKILL.md Principle that would emerge from that answer

Present the results as five blocks, one per question.

What you're learning: The five questions are a framework, not a script. Adapting them to a specific domain requires understanding their extraction purpose well enough to reformulate them without losing their effectiveness. This exercise also produces a working interview guide you can use in your own extraction work.
