User Research — Interviews & Synthesis
You have a problem brief. You know what you are trying to learn. Now comes the part that most product teams skip: actually going out to learn it.
Not because they do not want to talk to users — most PMs do want to. The obstacle is structural. Designing a good interview guide takes two to three hours. Running five interviews takes a full work week when you count scheduling, note-taking, and the synthesis that should follow. And the synthesis itself — turning five sets of interview notes into a coherent, evidence-based product direction — is another four to six hours of structured thinking.
Total investment to do user research properly: one to two weeks of work. Total investment when one interview falls through and another runs long and you still have three sprint planning meetings to run: zero. Research gets deprioritised until the next quarter, when the same conditions apply.
This lesson changes that equation. The /interview command (from the custom product-strategy plugin) designs a research-grade interview guide in minutes. The /synthesize-research command (from the official product-management plugin) turns your interview notes into structured product insights. The PM's job shifts from writing the guide and doing the synthesis to directing the research and evaluating the outputs — which is where judgment matters most.
The Five Interview Design Principles
Good user research is not about asking a lot of questions. It is about asking the right kind of questions in the right order. The /interview skill enforces five principles that distinguish research-grade interview design from the common failure modes.
| Principle | What it means | Right | Wrong |
|---|---|---|---|
| Behavior over opinion | Ask what users DO, not what they THINK | "Walk me through the last time you exported a report" | "How important is exporting to you?" |
| Past over hypothetical | Ask about actual past experiences | "Tell me about the last time you built a dashboard from scratch" | "Would you use a template library?" |
| Problem before solution | Do not mention features or solutions in the first half of the interview | "What happens when you need data your team does not have?" | "Would a real-time refresh feature solve this problem?" |
| Silence is data | When a participant pauses or struggles, wait at least 5 seconds | [silent pause after "What was difficult about that step?"] | "So I guess the difficult part was the export step?" |
| Why five times | When something interesting comes up, ask why — then why again | "Why does that happen?" → "And why is that the way it works?" | "Got it. And what happens next?" |
"Would you use feature X?" is the most common bad question in user research. Users are consistently optimistic about their future behavior — they say yes to features they will never use. The research literature is clear: stated intention is a weak predictor of actual behavior. Ask about the past, not the future.
The Interview Guide Structure
The /interview command produces a 45-minute guide by default. Each segment serves a specific purpose:
| Segment | Duration | Purpose | What it produces |
|---|---|---|---|
| Opening | 5 min | Build rapport; set expectations; get consent | Participant is relaxed and clear on the format |
| Warm-up | 5-10 min | Understand who this person is and how they work | Context for interpreting everything that follows |
| Core Discovery | 20-25 min | Understand the specific problem area in depth | The behavioral observations that become insights |
| Wrap-up | 5 min | Catch anything missed; leave door open | Surprises; referrals to other participants |
The Core Discovery segment is the research. Everything else is scaffolding that makes the Core Discovery segment work. An interview that skips warm-up and jumps to the main questions gets technically correct responses from a participant who does not yet trust the interviewer — which is a different kind of interview than the one you intended to run.
Worked Example: Interview Guide for InsightFlow
The L03 problem brief identified this as a priority discovery question:
"What specific reporting workflows did each of the three enterprise prospects describe? And do our existing enterprise customers have the same need?"
That question requires interviews. Run /interview to generate the guide:
/interview
context: "InsightFlow is a B2B SaaS analytics platform for data analysts
at 100-500 person companies. Primary persona: Analyst Alex, who builds
dashboards without SQL and whose biggest frustration is report requests
taking 3+ days.
Research question: How do analysts at mid-size companies currently
handle reporting workflows — specifically, what are the manual steps,
what tools do they use, and where do they get blocked?
Interview audience: Data analysts and analytics managers at companies
with 100-500 employees who currently build or receive reports.
Interview length: 45 minutes."
Sample /interview output (Core Discovery segment):
INTERVIEW GUIDE: Analyst Reporting Workflows
════════════════════════════════════════════════════════════
Purpose: Understand how analysts currently handle reporting —
the workflows, tools, blockers, and workarounds
Participants: Data analysts and analytics managers; 100-500 person
companies; moderate to high technical skill
Duration: 45 minutes
Interviewer: [Name]
Note-taker: [Name or "interviewer only"]
OPENING (5 minutes)
"Thanks for taking the time today. I'm [name], a PM at InsightFlow.
I'm here to learn from you — there are no right or wrong answers.
I'm interested in how you work with data and reports, not in testing
you on our product. Is it okay if I take notes? Before we start —
any questions for me?"
WARM-UP (8 minutes)
- "Can you tell me a bit about your role and what a typical day
looks like for you?"
- "How does working with data and reports fit into your day-to-day?"
- "How long have you been in this role? Has the way you handle
reporting changed over that time?"
CORE DISCOVERY (22 minutes)
Opening question:
"I'd like to understand how you currently handle reporting at your
company. Can you walk me through the last report you built or
delivered — as concretely as possible? Start from the moment
someone asked for it."
Follow-up depth questions (follow the story):
- "What prompted that request?"
- "What tools were involved in that process?"
- "What was the hardest step? Walk me through it."
- "How long did that typically take you?"
- "What do you do when you get stuck or the data is not where
you need it to be?"
Workaround probe (always ask):
"What did you do before you had [tool they mentioned]?
[Wait] Do you still use any part of that workaround today?"
Frequency and importance probe:
"How often do you go through a process like this?
When this takes longer than expected, what is the downstream impact?"
WRAP-UP (5 minutes)
- "Is there anything about reporting at your company that we have
not talked about that you think I should understand?"
- "If you could change one thing about your current reporting
workflow, what would it be?"
- "Is there someone else on your team who approaches this differently
that I should talk to?"
════════════════════════════════════════════════════════════
Evaluating the Guide
Check the Core Discovery section against the five principles:
Principle 1 — Behavior over opinion: The opening question asks participants to walk through a specific recent experience. It does not ask how important reporting is to them. Compliant.
Principle 2 — Past over hypothetical: "Walk me through the last report you built" is a past-tense behavioral anchor. No hypothetical questions appear in the Core Discovery segment. Compliant.
Principle 3 — Problem before solution: The guide does not mention InsightFlow features, templates, or product capabilities in the discovery questions. Compliant.
Principle 4 — Silence is data: The guide does not explicitly instruct on silence — this is a gap. Add a note: "When participants pause, wait at least 5 seconds before asking a follow-up."
Principle 5 — Why five times: The depth questions include "What was the hardest step?" and follow-ups but do not explicitly chain "why" sequences. Add: "After any strong observation, ask: 'Why does that happen?' then 'And why is that?'"
This guide passes three of five checks. The two gaps are minor — add notes to the guide and run the interviews.
From Guide to Notes to Synthesis
The /interview output is the input to the research process, not the end of it. The sequence is:
/interview → Guide → Run 5+ interviews → Structured notes → /synthesize-research → Insights
The note-taking template produced by /interview is designed to feed /synthesize-research efficiently. Each note-taking session captures:
- Key observations (what the participant did or experienced)
- Notable quotes (verbatim, with participant ID)
- Surprises / unexpected findings
- Follow-up questions for subsequent interviews
Five sets of structured notes produced from a consistent interview guide give /synthesize-research the raw material it needs to run thematic analysis across participants.
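The consistent note structure is what makes cross-participant comparison possible. As a minimal sketch, the template maps onto a small data model (Python; the field names are illustrative, not a format either plugin prescribes):

```python
from dataclasses import dataclass, field

@dataclass
class InterviewNote:
    """One participant's structured notes from a single session."""
    participant_id: str          # e.g. "A"
    role: str                    # e.g. "Analytics Manager, 200-person SaaS"
    observations: list[str] = field(default_factory=list)  # what they DID
    quotes: list[str] = field(default_factory=list)        # verbatim, attributed
    surprises: list[str] = field(default_factory=list)     # unexpected findings
    follow_ups: list[str] = field(default_factory=list)    # for later sessions

# Because every session fills the same fields, observations can be
# lined up across participants during synthesis.
note_a = InterviewNote(
    participant_id="A",
    role="Analytics Manager, 200-person SaaS company",
    observations=["Builds 3-4 reports per week; each takes 45-90 minutes"],
    quotes=["I would love to not be the human middleware between "
            "Salesforce and a spreadsheet."],
    surprises=["Has an InsightFlow subscription but never connected it "
               "to Salesforce"],
)
```

The point of the shared shape is not the code itself but the constraint: if one interviewer records opinions where another records observations, thematic analysis across sessions has nothing consistent to compare.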
Research Synthesis Methodology
/synthesize-research applies thematic analysis to interview notes. Understanding the methodology helps you evaluate whether the output reflects genuine patterns or surface-level clustering.
| Phase | What it does | What to look for in the output |
|---|---|---|
| Familiarisation | Reviews all notes to understand the landscape | Does the synthesis reflect the range of experiences, not just the most common? |
| Initial coding | Tags each observation with descriptive codes | Are codes behavioral ("waited 3 days for data") or attitudinal ("frustrated with the process")? |
| Theme development | Groups codes into candidate themes | Do themes reflect what participants did, or what they said they wanted? |
| Theme review | Checks themes have sufficient evidence | Is each theme supported by at least 3 participants? |
| Theme refinement | Defines and names themes clearly | Is the finding statement one clear, specific sentence? |
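The theme-review threshold in the table (each theme supported by at least three participants) is concrete enough to express as a check. A sketch under assumed data, with hand-assigned illustrative codes:

```python
# Coded observations: (participant_id, code) pairs produced during the
# initial-coding phase. The codes here are illustrative examples.
coded = [
    ("A", "manual-reformat"), ("A", "tool-unused"),
    ("B", "manual-reformat"), ("B", "automation-broke"),
    ("C", "manual-reformat"), ("C", "automation-broke"), ("C", "tool-unused"),
]

# Candidate themes group related codes (theme-development phase).
themes = {
    "Manual reformatting dominates report time": {"manual-reformat"},
    "Home-grown automation is brittle": {"automation-broke"},
    "Existing tools go unused for this workflow": {"tool-unused"},
}

def supporting_participants(theme_codes, coded):
    """Participants with at least one observation coded under the theme."""
    return {pid for pid, code in coded if code in theme_codes}

# Theme review: with n=3 participants, a 3-participant bar means
# "every participant"; scale the threshold with your sample size.
MIN_SUPPORT = 3
for name, codes in themes.items():
    support = supporting_participants(codes, coded)
    status = "keep" if len(support) >= MIN_SUPPORT else "needs more evidence"
    print(f"{name}: {sorted(support)} -> {status}")
```

This is the check to apply to /synthesize-research output by hand: for each theme, count the distinct participants behind it, not the number of supporting quotes (one talkative participant can generate many quotes for a theme only they experienced).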
The key quality check for any synthesis output: does each finding describe observed behavior, or does it summarise stated preferences?
Behavioral finding: "Participants spent an average of 40-60 minutes reformatting exported data in Excel before sharing reports — a step all five participants described as unavoidable."
Stated preference finding: "Users want better export options."
The first is evidence. The second is the users' proposed solution to the problem the first describes.
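The distinction can even be spot-checked mechanically as a first pass. A rough heuristic sketch; the marker list is an assumption of this example, not part of either plugin, and it cannot replace actually reading each finding:

```python
# Words that typically signal a stated preference rather than
# observed behavior. Illustrative list, deliberately incomplete.
PREFERENCE_MARKERS = ("want", "would like", "wish", "would use")

def looks_like_stated_preference(finding: str) -> bool:
    """Flag findings phrased as desires rather than observed behavior."""
    text = finding.lower()
    return any(marker in text for marker in PREFERENCE_MARKERS)

behavioral = ("Participants spent 40-60 minutes reformatting exported "
              "data in Excel before sharing reports.")
stated = "Users want better export options."

print(looks_like_stated_preference(behavioral))  # False
print(looks_like_stated_preference(stated))      # True
```

A finding that trips the heuristic is not necessarily wrong, but it belongs in the "stated preferences" column of the synthesis, not in the evidence base for a build decision.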
Exercise: Interview Guide + Research Synthesis
Plugin (Part 1): Custom product-strategy
Command (Part 1): /interview
Plugin (Part 2): Official product-management
Command (Part 2): /synthesize-research
Time: 30 minutes
Step 1 — Generate the interview guide for your L03 discovery question
From your L03 problem brief's discovery questions, use the following (or your own):
"How do analysts at mid-size companies currently handle reporting when they do not have direct database access?"
Run /interview:
/interview
context: "InsightFlow is a B2B SaaS analytics platform. Primary persona:
Analyst Alex — data analyst at a 100-500 person company, builds dashboards
without SQL, biggest frustration is report requests take 3+ days.
Research question: How do analysts currently handle reporting when they
do not have direct database access? What are the manual steps, what
tools do they use, and where do they get blocked?
Audience: Data analysts at 100-500 person companies with limited SQL access.
Interview length: 45 minutes."
Step 2 — Evaluate the guide against the five principles
For each of the five principles, find one question in the Core Discovery segment and assess: compliant, partial, or violation. If you find a violation, write the corrected question.
Focus on two things:
- Does any question in Core Discovery ask about the future? ("Would you...", "Do you think you would...")
- Does the guide mention InsightFlow features or propose any solution before the wrap-up?
Step 3 — Run /synthesize-research with mock interview notes
Use these simulated interview excerpts (three participants):
--- PARTICIPANT A (Analytics Manager, 200-person SaaS company) ---
Key observations:
- Builds 3-4 reports per week; each takes 45-90 minutes
- Process: pull data from Salesforce → copy to Google Sheets →
pivot table → format → send via email
- "The pivot table step is where I lose time. The data is never
clean enough for the pivot to work on the first try."
- Showed me her Sheets folder: 47 files named "monthly-report",
"monthly-report-v2", "monthly-report-final", etc.
- Does NOT use InsightFlow for this process even though company
has a subscription
Notable quotes:
- "If someone needs a number, they Slack me. I build the report.
They don't know how I got it and they don't care."
- "I would love to not be the human middleware between Salesforce
and a spreadsheet."
Surprises:
- Has a company InsightFlow subscription but has never connected
it to Salesforce — was never shown how during onboarding.
--- PARTICIPANT B (Data Analyst, 400-person retail company) ---
Key observations:
- Primary job is building the "weekly business review" dashboard
for leadership — takes 3 hours every Monday
- Uses 4 tools: BigQuery (data pull) → dbt (transformation) →
Sheets (formatting) → Slides (presentation layer)
- "The Slides step is the one that kills me. Every week I'm
copying numbers from Sheets into the same 12-slide deck."
- Tried to automate with Google Apps Script — "it worked for
two weeks and then broke when the schema changed"
Notable quotes:
- "I have a skill that the company needs. The company is using
me to do copy-paste."
- "I've asked for a BI tool three times. Finance keeps saying
the ROI isn't there."
Surprises:
- Reports to the Head of Finance, not a data or engineering team.
Has no path to getting technical tools approved.
--- PARTICIPANT C (Marketing Analyst, 150-person B2B company) ---
Key observations:
- Builds pipeline reports for Sales every Friday
- Data lives in HubSpot; exports to CSV; manipulates in Excel
- "The export breaks about once a month. HubSpot changes something
and my macros stop working."
- Showed me her Excel file: 23 sheets, colour-coded by quarter
Notable quotes:
- "Every analyst I know has a version of this file."
- "I don't want a fancy BI tool. I want the data to be
in the right place when I need it."
Surprises:
- Uses InsightFlow for a different team's data but considers
it "a different workflow" — has not thought about using it
for the pipeline report.
Run /synthesize-research with these notes:
/synthesize-research
Research type: User interviews (n=3)
Research question: How do analysts currently handle reporting when they
do not have direct database access?
[Paste the three participant notes above]
Synthesise into: key findings by frequency and impact, behavioral
observations vs stated preferences, and a section on what we should
NOT build based on this evidence.
Step 4 — Evaluate the synthesis output
Check the synthesis against these quality criteria:
- Behavioral vs stated: Does the output distinguish between what participants did (copy data to Sheets every Monday, macros break monthly) and what they said they wanted (a BI tool, automated reports)?
- Finding specificity: Is each finding a clear, specific statement? "Participants spent 45-90 minutes per report reformatting exported data" is specific. "Users struggle with reporting" is not.
- What NOT to build: Is there a section that explicitly identifies what the research does NOT support building right now? (If absent, prompt: "Add a section on what this research suggests we should NOT build in the short term and why.")
- InsightFlow-specific insight: Two of the three participants have InsightFlow subscriptions but do not use InsightFlow for the workflow being researched. Does the synthesis flag this as a finding? It should — this is the highest-priority product insight in the notes.
Step 5 — Extend: prompt for persona development
Based on this research synthesis, draft a persona profile for the analyst
who does reporting without direct database access. Use this structure:
- Name and one-line description
- Who they are (role, company, tools used)
- What they are trying to accomplish (goals and jobs to be done)
- Key pain points (top 3 frustrations or workarounds)
- What they value in a solution
- 2-3 representative quotes from the interviews
Then: does this persona overlap with or diverge from InsightFlow's
existing primary persona (Analyst Alex)? What does this tell us
about whether our current persona description is accurate?
The research synthesis you produced in this exercise becomes the evidence layer for your L06 feature spec. When you write a spec for the automation builder, the synthesis provides the behavioral data that grounds your problem statement.
What You Built
You ran the full user research cycle — from interview guide design through research synthesis — using both plugins in sequence. The /interview output (custom plugin) designed a behaviorally grounded guide. The /synthesize-research output (official plugin) turned structured notes into prioritised, evidence-based insights.
You also practiced the evaluation skill that makes research useful rather than merely documented: distinguishing behavioral findings from stated preferences, checking whether the synthesis acknowledges what NOT to build, and verifying that the most counter-intuitive insight (participants have InsightFlow subscriptions but do not use InsightFlow for this workflow) made it into the output.
The synthesis connects forward to Lesson 6: when you write a feature spec for InsightFlow's automation builder, this research is your problem statement's evidence.
Try With AI
Use these prompts in Cowork or your preferred AI assistant.
Prompt 1 — Reproduce (apply what you just learned):
I am a PM at a project management tool. I want to understand how
engineering managers currently run their weekly standup and sprint
progress reviews.
Using the five behavioral interview design principles (behavior over
opinion, past over hypothetical, problem before solution, silence is
data, why five times), design a 30-minute interview guide for
engineering managers at companies with 20-100 engineers.
For each question in the Core Discovery segment, label which principle
it is designed to apply.
What you're learning: Labelling the principle each question applies makes the discipline explicit. Most PMs know the principles in theory; mapping them to specific questions builds the habit of applying them in practice.
Prompt 2 — Adapt (change the context):
Here are two interview questions for a fitness app:
Question A: "Would you use a feature that tracks your sleep
and automatically adjusts your workout plan?"
Question B: "Tell me about the last week when you felt your
workout plan wasn't working. Walk me through what was happening."
Explain why Question A violates the behavioral interview design
principles. Rewrite it as a behaviorally grounded question.
Then: which of the two questions is more likely to produce
insights that lead to a good product decision? Why?
What you're learning: The contrast between a hypothetical and a behavioral question illustrates the principle more vividly than a definition does. Question A produces optimistic self-prediction; Question B produces evidence.
Prompt 3 — Apply (connect to your domain):
Think of a user research question your team is currently trying to
answer (or has recently tried to answer).
Design one interview question for each of the five behavioral
principles. Then identify which principles your team's current
research practice applies consistently and which it routinely violates.
What would change about your research output if you applied
all five principles every time?
What you're learning: Applying the principles to your own research practice reveals which ones are habits and which are gaps. The principle most teams violate is almost always the same: asking hypothetical questions and treating the answers as evidence.
Continue to Lesson 5: Competitive Intelligence →