The Decision Framework

SDD is a power tool, not a universal solution. The wisdom is knowing when NOT to use it.

In Lesson 6, you learned the full implementation workflow: task delegation, context isolation, atomic commits. You now have the complete SDD toolkit. But a toolkit isn't valuable if you use a sledgehammer for every nail.

This lesson develops your judgment. By the end, you'll recognize task characteristics that signal "use SDD" versus "skip the ceremony." You'll learn the lightweight spec pattern for borderline cases. And you'll understand the common critiques of SDD so you can navigate them thoughtfully.

When SDD Excels

The four-phase workflow adds overhead: research time, specification writing, interview questions, task orchestration. That overhead pays dividends when the alternative is worse.

| Scenario | Why SDD Helps |
| --- | --- |
| Large refactors (15+ files) | Upfront spec prevents mid-refactor pivots |
| Unclear requirements | Research phase surfaces what you don't know |
| Learning new libraries | Parallel research accelerates understanding |
| Team coordination | Spec serves as contract between implementers |
| Legacy modernization | Captures original intent before rebuilding |

The pattern: These scenarios share a characteristic—complexity that exceeds working memory. You can't hold a 15-file refactor in your head. You can't remember all the assumptions you made while exploring an unfamiliar library. The specification becomes external memory that preserves decisions across sessions and collaborators.

Large Refactors: The Specification as Anchor

Without a spec, large refactors drift. You start renaming a module, realize you need to update imports, discover those imports have side effects, chase those side effects into another module, and suddenly you're four directories deep with no clear picture of what you've changed or why.

With a spec, you define the end state upfront. Each task references that spec. When you're deep in the fourth module, the spec reminds you: "We're renaming for consistency, not refactoring for performance." That anchor prevents well-intentioned tangents from derailing the project.

Unclear Requirements: Research Reveals Structure

When requirements are fuzzy, SDD's research phase transforms vague goals into concrete specifications. The research isn't just about finding code patterns—it's about discovering what questions you didn't know to ask.

A developer approaching "add real-time collaboration" without SDD might jump straight to WebSocket code. With SDD, the research phase reveals: What conflict resolution strategy? What persistence model? What happens offline? By the time you write the spec, you understand the problem well enough to solve it.

Learning New Libraries: Parallel Acceleration

Learning a new library through vibe coding means sequential trial and error. You try something, it fails, you search for examples, try again. Each failure teaches one lesson.

SDD's parallel research pattern changes this. You spawn subagents to investigate different aspects simultaneously: one explores authentication patterns, another investigates pagination, a third examines error handling. Research that takes hours sequentially completes in minutes—and you have a comprehensive spec before writing any application code.
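
To make the fan-out concrete, here is a minimal sketch, assuming a hypothetical `research()` helper that wraps a single subagent call (not a real agent API):

```python
from concurrent.futures import ThreadPoolExecutor

# Independent questions an unfamiliar-library investigation might fan out.
TOPICS = ["authentication patterns", "pagination", "error handling"]

def research(topic: str) -> str:
    """Hypothetical helper: delegate one research question to a subagent."""
    return f"findings on {topic}"  # stand-in for a real agent call

# Fan out: every topic is investigated concurrently rather than one by one.
with ThreadPoolExecutor(max_workers=len(TOPICS)) as pool:
    findings = dict(zip(TOPICS, pool.map(research, TOPICS)))
```

The speedup comes from independence: each question can be answered without waiting on the others, so total wall-clock time approaches that of the slowest single investigation.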

When SDD Is Overkill

The same overhead that pays dividends on complex tasks wastes time on simple ones.

| Scenario | Why Skip SDD |
| --- | --- |
| Single-file bug fixes | Three-document workflow for a one-line fix wastes time |
| Well-defined simple features | Implementation is obvious; spec adds no value |
| Exploratory prototyping | You're discovering requirements; vibe coding is faster |
| Production incidents | Need immediate action, not spec documents |

The pattern: These scenarios share a characteristic—the solution is either obvious or unknowable. When you know exactly what to change, specifying it first adds no information. When you're exploring to discover what's possible, specifying upfront constrains discovery.

Bug Fixes: Match Effort to Impact

A null pointer exception on line 47 of a utility function doesn't need a specification. You know the problem. You know the fix. The overhead of research, spec, interview, and tasks would take longer than just fixing the bug.

Heuristic: If you can explain the fix in one sentence, skip SDD. "Add null check before accessing user.preferences" doesn't benefit from a formal specification.

Exploratory Prototyping: Discovering the Problem

Sometimes you don't know what you're building until you build something. A creative exploration—"What if we visualized this data differently?"—benefits from rapid iteration, not upfront specification.

Vibe coding serves exploration. You try an approach, see what happens, adjust. The "specification" emerges from what you learn. Formalizing too early constrains the creative process.

Important: Exploration is a phase, not an end state. Once you discover what works, you should often write a specification before building the production version. Exploration generates understanding; SDD turns that understanding into reliable implementation.

Production Incidents: Time Sensitivity Overrides Process

When the system is down, you fix it first and document later. A production incident isn't the time for research phases and interview questions.

Post-incident: After resolving the immediate issue, SDD can help with the follow-up. "Prevent this class of failure" is the kind of unclear-requirement task where SDD excels. The incident response is direct; the long-term fix benefits from specification.

The Decision Heuristic

For quick classification, apply this logic:

```
IF files_affected > 5 OR requirements_unclear OR learning_new_tech:
    Use SDD
ELSE IF single_file AND bug_fix:
    Skip SDD
ELSE:
    Judgment call — try lightweight spec first
```

The variables:

  • files_affected > 5 — Changes across multiple files compound complexity. Coordination costs rise.
  • requirements_unclear — If you can't explain the deliverable in one paragraph, research phase adds value.
  • learning_new_tech — Unfamiliar territory benefits from parallel research.
  • single_file AND bug_fix — Known problem, contained scope, obvious solution.
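
Expressed as a runnable sketch, the same heuristic looks like this (field names mirror the variables above; the 5-file threshold is the heuristic's rule of thumb, not a hard boundary):

```python
from dataclasses import dataclass

@dataclass
class Task:
    files_affected: int
    requirements_unclear: bool
    learning_new_tech: bool
    single_file: bool
    bug_fix: bool

def choose_approach(task: Task) -> str:
    """Classify a task: full SDD, skip it, or start with a lightweight spec."""
    if task.files_affected > 5 or task.requirements_unclear or task.learning_new_tech:
        return "use SDD"
    if task.single_file and task.bug_fix:
        return "skip SDD"
    return "judgment call: try a lightweight spec first"

# A one-line null-check fix: known problem, contained scope.
print(choose_approach(Task(1, False, False, True, True)))  # skip SDD
```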

Most tasks fall into the "judgment call" category. That's where the lightweight spec pattern helps.

The Lightweight Spec Pattern

When you're uncertain whether SDD is warranted, start light:

```markdown
# Task: [One-line description]

## Constraints

- [What NOT to do]
- [Boundaries on scope]

## Success Criteria

- [ ] [Measurable outcome 1]
- [ ] [Measurable outcome 2]
```

That's it. No reference architecture analysis. No multi-phase implementation plan. Just constraints and success criteria.
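
Filled in for a hypothetical task, the pattern might read:

```markdown
# Task: Migrate the team changelog from HTML to Markdown

## Constraints

- Do NOT change published URLs
- Do NOT restyle the page; formatting migration only

## Success Criteria

- [ ] Every existing entry renders correctly in the new format
- [ ] The site build passes with no new warnings
```

Ten lines of spec, and both scope creep ("while I'm here, let's redesign the page") and the definition of done are pinned down.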

Why this works: Constraints prevent scope creep. Success criteria define done. These two elements provide 80% of specification value with 20% of the overhead.

When to expand: If writing the lightweight spec reveals complexity—"Wait, how DO we handle the existing data?"—that's your signal. Convert to a full specification with a research phase and interview. The lightweight spec becomes the seed for comprehensive planning.

When to ship as-is: If the lightweight spec feels sufficient—constraints are clear, success criteria are unambiguous—proceed directly to implementation. The spec exists for reference without the full ceremony.

The Judgment Skill

Experienced practitioners develop intuition for when spec ceremony pays off. This intuition isn't mystical—it's pattern recognition from feedback loops.

Building judgment requires:

  1. Tracking outcomes. When a project struggles, ask: Would a specification have helped? When a project succeeds quickly, ask: Did I need the spec, or did I write it out of habit?

  2. Calibrating overhead. Time your specification writing. If a lightweight spec takes 10 minutes and a full spec takes 2 hours, you know the investment you're making. Match investment to risk.

  3. Recognizing signals. Certain project characteristics predict specification value:

    • Multiple collaborators → Spec as contract
    • Unfamiliar domain → Research phase valuable
    • Reversibility low → Upfront planning justified
    • Time horizon long → Specification ages better than memory

Critiques and Counterpoints

SDD isn't universally praised, and its critics raise legitimate concerns worth acknowledging.

Critique 1: "This Is Just Waterfall"

The concern: Sequential phases (research → spec → implement) resemble waterfall methodology, which failed precisely because upfront planning couldn't anticipate implementation reality.

The counterpoint: SDD phases aren't isolated handoffs. The spec updates during implementation when reality diverges from plan. The difference from waterfall: tasks are atomic and reversible (git revert), not months-long commitments with no feedback.

Critique 2: "Double Code Review"

The concern: Reviewing the spec AND reviewing the code doubles overhead.

The counterpoint: Spec review catches design problems; code review catches implementation problems. They're different error classes. Finding a design flaw during code review means discarding implementation work. Spec review surfaces design issues when changes are cheap.

Critique 3: "Diminishing Returns at Scale"

The concern: As projects grow, specification maintenance becomes its own burden.

The counterpoint: This is legitimate. Long-running projects must decide whether to maintain specs as living documents (Spec-Anchored level) or treat them as implementation kickoff artifacts (Spec-First level). The choice depends on organizational needs.

Critique 4: "Overhead on Simple Tasks"

The concern: Not every change needs four phases and three documents.

The counterpoint: Agreed. That's the point of this lesson. The decision framework exists precisely because SDD isn't always appropriate. The skill is knowing when.

Try With AI

Running Example Concluded: We completed "Personal AI Employees in 2026" using the full workflow. Now we reflect: was SDD worth it?

Prompt 1: Retrospective on Our Running Example

```
We used SDD to write a CTO-facing report on AI tools:
- Research: 4 agents investigated tools, ROI, risks, trajectory
- Specification: report-spec.md with audience and constraints
- Refinement: Interview clarified CTO context and decision needs
- Implementation: Task-based writing with section commits

Was this overkill for a 2000-word report? Walk through:
- What would we have missed with vibe coding?
- What overhead did SDD add?
- Net: was the investment justified?
```

What you're learning: For our report, research prevented writing for the wrong audience. The spec prevented scope creep into tutorials. Refinement caught that mid-size CTOs have different needs than enterprise. But a quick blog post wouldn't need this ceremony.

Prompt 2: Classify These Tasks

```
Apply the decision heuristic to these writing tasks:

1. Fix typo in documentation (one line, obvious fix)
2. Write quarterly report update (unclear what changed, many sections)
3. Update team member bio (one paragraph, clear change)
4. Write product launch announcement (unclear audience, needs research)
5. Add FAQ item (one question, clear answer)

For each: SDD, skip SDD, or lightweight spec?
```

What you're learning: Quick classification builds intuition. Tasks 1, 3, 5 are skip SDD (obvious, contained). Task 2 needs at least lightweight spec (what changed?). Task 4 needs full SDD (audience research required). Practice makes this instant.

Prompt 3: Lightweight Spec for Borderline Task

Task: "Write a blog post about our AI adoption journey"

This is borderline—not trivial, not massive. Write a lightweight spec:
- Just constraints (what NOT to reveal, what tone)
- Just success criteria (what should readers take away?)

If writing reveals complexity, note what research we'd need.

What you're learning: Lightweight specs are your probe. "Blog post about AI adoption" sounds simple until you write constraints: "Don't reveal vendor pricing" (wait, can we share ROI without that?), "Don't criticize tools we evaluated" (or do we want honest comparisons?). The spec reveals hidden decisions.