MVP — The Minimum That Validates
The MVP (Minimum Viable Product) is the most misunderstood concept in startup methodology. Teams build products that are "minimum" in the sense that they cut every feature they could not finish in time — and call the resulting incomplete product their MVP. This is not minimum viable thinking. It is scope management with better marketing.
The real MVP is not the minimum product you could ship. It is the minimum product that tests your most critical assumptions at the lowest possible cost. These are very different things. An MVP might be a landing page, a concierge service, or a five-feature SaaS application — depending on which assumptions you need to test. The question that defines the MVP is not "what is the smallest thing we can build?" It is "what is the smallest thing we need to build to learn whether our critical assumptions are correct?"
In Lesson 5, you mapped your assumptions and ranked them by risk. Now you scope the smallest product that tests the top three.
The Purpose of an MVP
The MVP exists to answer one question: are the TIER 1 assumptions correct?
Every feature in an MVP must earn its place by testing a critical assumption. If you cannot answer "which assumption does this feature test?", the feature does not belong in the MVP. If a feature tests no critical assumption, it is scope that delays learning.
This reframing changes how you approach the feature list. The question is never "should we include this feature?" It is "which assumption does this test, and is that assumption critical enough to justify the build time?"
| Wrong Question | Right Question |
|---|---|
| What features should we build? | What assumptions do we need to test? |
| What is the minimum product? | What is the minimum test of critical assumptions? |
| How quickly can we build all this? | What can we NOT build without losing learning? |
| What will impress investors? | What will tell us if we have a real business? |
Feature Inclusion Criteria
A feature belongs in the MVP if — and only if — it tests a TIER 1 or TIER 2 assumption from your assumption map.
The test: for every feature candidate, ask:
- Which specific assumption does this test?
- Is that assumption TIER 1 or TIER 2?
- Is this the cheapest way to test that assumption?
If the answer to question 1 is vague, or the assumption in question 2 turns out to be TIER 3, the feature does not belong in the MVP. If the answer to question 3 is "no — we could test this assumption with a conversation", the feature does not belong in the MVP at this stage.
The exclusion list is equally important. For every excluded feature, document: (a) which assumption it tests, and (b) why that assumption does not need to be tested in the MVP. This makes the exclusion defensible when stakeholders push back, and it records the MVP design decision for the post-pilot review.
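The inclusion test above can be sketched as a simple filter over feature candidates. The data shape and field names here are illustrative, not part of any prescribed tool — the point is that inclusion is a mechanical check against the assumption map, not a taste judgment:

```python
from dataclasses import dataclass

@dataclass
class FeatureCandidate:
    name: str
    assumption_id: str         # which assumption this feature tests
    assumption_tier: int       # 1, 2, or 3 from the assumption map
    cheaper_test_exists: bool  # could a conversation test it instead?

def belongs_in_mvp(f: FeatureCandidate) -> bool:
    """A feature earns its place only if it tests a TIER 1 or TIER 2
    assumption AND building it is the cheapest way to run that test."""
    return f.assumption_tier in (1, 2) and not f.cheaper_test_exists

candidates = [
    FeatureCandidate("PO matching with confidence", "A-002", 1, False),
    FeatureCandidate("Advanced reporting", "A-009", 3, False),
    FeatureCandidate("Pricing page experiment", "A-001", 1, True),
]

included = [f.name for f in candidates if belongs_in_mvp(f)]
excluded = [f.name for f in candidates if not belongs_in_mvp(f)]
print("Build:", included)  # only the TIER 1 feature with no cheaper test
print("Defer:", excluded)  # TIER 3 feature, plus one testable by conversation
```

Note that the pricing experiment is excluded even though it tests a TIER 1 assumption: a conversation (or a landing page) tests price more cheaply than a build.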
Success and Failure Criteria
Success criteria define what "the MVP worked" means. They must be:
- Specific — a number, not a description
- Measurable — observable directly from pilot data
- Tied to assumptions — each criterion validates or invalidates a specific assumption
Failure criteria define the pre-agreed trigger for a pivot conversation. Writing failure criteria before the pilot is the most important discipline in the MVP design process. After six weeks of working with pilot customers, the team is emotionally attached to the idea. Failure criteria written before the pilot are the instructions you left yourself for the moment when objectivity is hardest to maintain.
Failure criteria are not failures. They are the pre-agreed point at which the data tells you to change direction — before you have invested another six months in the wrong direction.
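Because both sets of criteria are fixed before the pilot, the post-pilot decision can be almost mechanical. A minimal sketch, with placeholder thresholds that echo the worked example below rather than any recommended values:

```python
# Pre-agreed thresholds, written BEFORE the pilot starts (values illustrative).
SUCCESS = {"signed_pilots": 3, "ai_accuracy": 0.90, "invoice_adoption": 0.70}
FAILURE = {"signed_pilots": 3, "ai_accuracy": 0.80, "invoice_adoption": 0.50}

def pilot_verdict(metrics: dict) -> str:
    """Return 'continue', 'pivot', or 'investigate' from pilot data,
    using only the criteria fixed before emotional attachment set in."""
    if any(metrics[k] < FAILURE[k] for k in FAILURE):
        return "pivot"        # a pre-agreed failure trigger fired
    if all(metrics[k] >= SUCCESS[k] for k in SUCCESS):
        return "continue"     # every success criterion met
    return "investigate"      # mixed signals: dig in before the V1 build

print(pilot_verdict({"signed_pilots": 3, "ai_accuracy": 0.93,
                     "invoice_adoption": 0.75}))  # → continue
print(pilot_verdict({"signed_pilots": 3, "ai_accuracy": 0.85,
                     "invoice_adoption": 0.60}))  # → investigate
```

The middle band matters: real pilots often land between the success and failure thresholds, and the honest answer there is to investigate, not to declare victory.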
The AP Automation MVP
Worked example. With the assumption map from Lesson 5 — TIER 1 assumptions A-001 (price), A-002 (AI accuracy), A-005 (finance-team adoption) — you scope the minimum viable product:
/hypothesis
Design the MVP for my AP automation SaaS.
Context: Assumption map from Lesson 5.
Most critical assumptions to test: A-001 (CFOs pay $500/month),
A-002 (AI matching accuracy >90%), A-005 (finance team adoption).
Team: 2 developers, 1 founder-salesperson.
Time budget: 8 weeks to MVP.
Target: 3 paying pilot customers at $500/month each.
Design the MVP:
1. Minimum feature set (what is absolutely necessary to test critical
assumptions — and which assumption each feature tests)
2. Features explicitly excluded (and why — what do they NOT test
that makes them non-critical for the MVP?)
3. Success criteria (specific, measurable, tied to assumptions)
4. Failure criteria (specific triggers for pivot conversation)
5. Week-by-week build plan
Sample MVP design output:
MVP DESIGN DOCUMENT
Venture: AP Automation SaaS | Timeline: 8 weeks | Target: 3 pilots
════════════════════════════════════════════════════════════
CORE PURPOSE OF THIS MVP:
Test three things: (1) Will 3 companies pay $500/month before the
full product exists? (2) Can our AI match invoices at >90% accuracy
on real customer invoice formats? (3) Will finance teams actually
use a new tool in their daily workflow?
MINIMUM FEATURE SET (build these; nothing else):
FEATURE 1: Invoice ingestion from email + WhatsApp
Tests: Foundation for all other features — without this, nothing works
Spec: Forward any invoice to a dedicated email address or WhatsApp number;
system extracts: vendor name, amount, invoice number, date.
Human override allowed at this stage.
FEATURE 2: AI-powered PO matching with confidence score
Tests: A-002 directly — the core technical claim
Spec: Match extracted invoice data to open POs in uploaded Excel.
Show: matched PO, confidence %, unmatched items.
Human override is fine — perfect accuracy not required.
FEATURE 3: WhatsApp approval workflow
Tests: A-005 (adoption) via the channel finance teams already use
Spec: When invoice is ready for approval, send WhatsApp message to CFO:
"[Vendor]: $[X] — Invoice #[N] — Match: 97% confidence.
Reply APPROVE or REJECT."
Log the response with timestamp and phone number.
FEATURE 4: Simple AP dashboard (web only — no mobile app)
Tests: Whether CFOs value real-time visibility
Spec: Table of all invoices, status (pending/approved/paid),
total outstanding, this month's payments. Export to CSV.
FEATURE 5: Manual payment processing confirmation
Tests: Closing the workflow loop — but no automated bank integration
Spec: User manually marks invoice as "paid." System logs date and
generates a payment record.
EXPLICITLY EXCLUDED (do not build in 8 weeks):
✗ ERP integration — tests A-006 (ERP dependency) which is TIER 2;
manual CSV export is sufficient to validate the MVP learning
✗ Mobile app — web is sufficient; mobile is an adoption assumption
that does not need to be tested before proving core value
✗ Multi-currency — all pilots are single-currency; no learning
from adding complexity without a multi-currency pilot
✗ Advanced reporting — basic dashboard sufficient; detailed reports
are a TIER 3 feature that does not affect core validation
✗ Automated bank payments — significant regulatory complexity;
tests no critical assumption in the 8-week window
✗ OCR for handwritten invoices — approximately 70% of invoices
are digital; handwritten is V2 if pilots reveal it as critical
SUCCESS CRITERIA (pivot-or-continue decision at end of pilot):
✓ 3 signed pilot agreements at $500/month — validates A-001
✓ AI matching accuracy >90% on pilot customer invoices — validates
A-002 (partially; 95% is V1 target, 90% is MVP acceptable)
✓ >70% of invoices processed through system by Week 4 of pilot —
validates A-005
✓ At least 1 CFO says: "I would be comfortable showing this to
my auditor" — validates the emotional job from discovery
FAILURE CRITERIA (triggers pivot conversation):
✗ Cannot sign 3 pilots at $500/month → A-001 wrong; pivot on price or
customer segment (try $350/month or a different revenue model)
✗ AI accuracy <80% on pilot invoices → A-002 wrong; pivot to
semi-automated model with human review step
✗ <50% invoice adoption after 4 weeks → product/UX issue or wrong
customer segment; investigate before V1 build
8-WEEK BUILD PLAN:
Week 1-2: Email + WhatsApp ingestion; data extraction
Week 3-4: PO matching model; confidence scoring; basic dashboard
Week 5-6: WhatsApp approval workflow; manual payment confirmation
Week 7: Internal testing; performance testing on 500 sample invoices
Week 8: Pilot customer onboarding; measurement begins
════════════════════════════════════════════════════════════
The exclusion list deserves attention. Six features are explicitly out — not because they are bad features, but because they test no critical assumption in the eight-week window. ERP integration might be the most commercially significant feature in V1, but it tests A-006 (ERP dependency), which is a TIER 2 assumption. The pilot will tell you whether customers need ERP integration before they will renew. You do not need to build it to learn that.
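Feature 2's matching-with-confidence behaviour can be sketched with simple string similarity. A real pilot would use a trained model; the scoring weights, threshold, and field names below are assumptions chosen purely for illustration:

```python
from difflib import SequenceMatcher

def match_invoice_to_pos(invoice: dict, open_pos: list[dict]) -> dict:
    """Pick the open PO whose vendor and amount best match the invoice,
    and report a confidence score so a human can override weak matches."""
    def score(po: dict) -> float:
        vendor_sim = SequenceMatcher(None, invoice["vendor"].lower(),
                                     po["vendor"].lower()).ratio()
        amount_sim = 1.0 if abs(invoice["amount"] - po["amount"]) < 0.01 else 0.0
        return 0.6 * vendor_sim + 0.4 * amount_sim  # weights are illustrative

    best = max(open_pos, key=score)
    confidence = score(best)
    # Below the confidence threshold, return no match and leave it to a human.
    return {"po": best["po_number"] if confidence >= 0.5 else None,
            "confidence": round(confidence, 2)}

pos = [{"po_number": "PO-1042", "vendor": "Acme Supplies", "amount": 1250.00},
       {"po_number": "PO-1043", "vendor": "Globex Ltd", "amount": 980.00}]
result = match_invoice_to_pos({"vendor": "ACME Supplies", "amount": 1250.00}, pos)
print(result)  # → {'po': 'PO-1042', 'confidence': 1.0}
```

The design point survives any implementation: the system reports how sure it is, and low-confidence matches fall back to human review — which is exactly why "perfect accuracy not required" is written into the Feature 2 spec.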
For intrapreneurs, the MVP is often a pilot programme with one internal team or one existing customer segment — not a deployed product. Your "build plan" may be a staffing request and a timeline rather than a development sprint. And your "success criteria" may be adoption metrics and qualitative feedback rather than payment at a specific price point. The structure is identical: define what you are testing, what success looks like, and what would trigger a direction change.
Exercise: MVP Scoping (Exercise 3, Part 2)
Type: Lean Startup — MVP Design | Time: 40 minutes | Goal: Design the minimum viable product for your venture using the assumption map from Lesson 5
From Exercise 3 Part 1 (Lesson 5), you have: a full assumption map with TIER 1/2/3 classifications and a 4-week validation plan. Your top 3 TIER 1 assumptions are the scope boundary for this exercise.
Step 4 — MVP scoping using /hypothesis.
/hypothesis
Design the minimum viable product for my venture.
Venture: [Your idea from Lesson 4, 2-3 sentences]
Team: [Your team size and skills]
Time budget: [Your available time — e.g., 6 weeks, 3 months]
Top 3 critical assumptions to test:
A-001: [Description]
A-002: [Description]
A-003: [Description]
Design the MVP:
1. Minimum feature set — for each feature, which assumption it tests
2. Explicit exclusion list — for each excluded feature, why it tests
no critical assumption at this stage
3. Success criteria (specific and measurable, tied to assumptions)
4. Failure criteria (specific thresholds that trigger pivot conversation)
5. Week-by-week build plan
Constraint: No feature that tests only a TIER 3 assumption belongs
in the MVP.
Review the output. For each included feature, ask: "If I removed this feature, would I lose the ability to test a critical assumption?" If the answer is no, the feature does not belong. Be honest about scope creep disguised as "minimum viable".
Deliverable: An MVP scoping document with:
- Included features with assumption mapping
- Excluded features with rationale
- 3-4 success criteria (specific, measurable)
- 2-3 failure criteria (specific thresholds with pivot implications)
- Week-by-week build plan
Your MVP design feeds directly into Lesson 7 (Build-Measure-Learn). When you complete your pilot, you will analyse the results against the success and failure criteria you defined here. Save this document — the pre-pilot criteria are the objective baseline you will need when the post-pilot data comes in and emotional attachment makes objectivity difficult.
Try With AI
Use these prompts in Cowork or your preferred AI assistant.
Reproduce — Run the chapter's worked example:
/hypothesis
Design the MVP for an AP automation SaaS.
Team: 2 developers, 1 founder.
Timeline: 8 weeks.
Target: 3 paying pilot customers at $500/month.
Critical assumptions to test:
- CFOs will pay $500/month before the full product exists
- AI PO matching can achieve >90% accuracy on real invoice formats
- Finance teams will adopt the system (>70% of invoices through system
by Week 4 of pilot)
Design:
1. Minimum feature set with assumption mapping
2. Explicit exclusion list with rationale
3. Success criteria (specific and measurable)
4. Failure criteria (triggers for pivot decision)
5. 8-week build plan
What you are learning: Notice how the exclusion list (6 features out) is longer than the inclusion list (5 features in). This is intentional — the AP automation MVP is designed to test assumptions with a small feature set, not to build a complete product. The rationale for each exclusion is as important as the feature list itself.
Adapt — Scope an MVP with different constraints:
/hypothesis
Design an MVP for a freelance marketplace connecting designers
with small business owners.
Team: 1 developer (part-time), 1 founder.
Timeline: 4 weeks.
Budget: $2,000 for tooling.
Critical assumptions to test:
- Small business owners will pay $99/month for on-demand design help
- Freelance designers will accept 70% revenue share
- Job-matching can happen within 24 hours via a curated shortlist
Design the MVP with feature in/out list, success/failure criteria,
and 4-week build plan.
What you are learning: A two-sided marketplace MVP has double the critical assumptions — one set for each side of the market. Notice how the success criteria must reflect both: designer acquisition AND buyer activation. A marketplace that signs 100 designers but no buyers has not validated anything critical.
Apply — Design your own MVP:
/hypothesis
Design the MVP for my venture.
Venture: [Description from Lesson 4]
Team: [Your team and skills]
Timeline: [Your available time]
Top 3 critical assumptions from my Lesson 5 map:
1. [A-001 description]
2. [A-002 description]
3. [A-003 description]
Design the MVP:
1. Minimum feature set — each feature must map to a critical assumption
2. Explicit exclusion list — features I am choosing NOT to build and why
3. Specific, measurable success criteria
4. Failure criteria with pivot implications
5. Week-by-week build plan
What you are learning: The prompt requires you to state your top 3 critical assumptions explicitly before asking for the MVP design. If you cannot state them clearly, go back to Lesson 5 and refine your assumption map. The MVP design is only as good as the assumption clarity underneath it.