Chapter 53: Iterating on AI Output

James types a prompt into Claude Code. He asks it to build a search_notes function for SmartNotes: given a list of Note objects and a search term, return every note whose title or body contains that term. He has already written six tests covering exact matches, partial matches, case sensitivity, empty lists, missing terms, and notes with no body text.

He runs uv run pytest. Four of six tests pass. Two fail.

He reaches for the keyboard, ready to paste the entire error output back into Claude Code with a quick "fix it." Emma stops him.

"Read the diff before you re-prompt."

James frowns. "What diff?"

"The difference between what you asked for and what it gave you. If you just say 'fix it,' the AI has to guess what went wrong. If you say 'the case-insensitive match on line 12 is comparing against note.title but not note.body,' the AI knows exactly where to look." She taps the screen. "Iteration is not about sending more messages. It is about sending better ones."

This chapter teaches the complete prompt-test-revise cycle. In Chapter 46 Lesson 4, you did a single re-prompt: one error, one fix, one round. That was training wheels. Real AI collaboration rarely finishes in one pass. You will write your own prompts from scratch, run multiple rounds of revision, read diffs to understand what changed, and learn when typing the fix yourself is faster than asking the AI again.

No new Python syntax appears in this chapter. Every language feature you need (dataclasses, pytest, control flow, type annotations) comes from Chapters 47 through 52. This chapter is about process: the workflow you follow when the first draft of AI-generated code is not quite right.
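To make James's setup concrete, here is one plausible shape for his six tests. The test names, sample notes, and the reference search_notes are illustrative assumptions, not the chapter's exact code; the Note fields match Chapter 51's dataclass, though defaults are added here so the no-body case is easy to write.

```python
from dataclasses import dataclass, field


@dataclass
class Note:
    title: str
    body: str = ""
    tags: list[str] = field(default_factory=list)
    word_count: int = 0


def search_notes(notes: list[Note], term: str) -> list[Note]:
    # Reference implementation so this file runs standalone:
    # case-insensitive match on title or body.
    needle = term.lower()
    return [n for n in notes
            if needle in n.title.lower() or needle in n.body.lower()]


PYTHON = Note(title="Python Basics", body="Variables and loops.")
GIT = Note(title="Git Workflow", body="Commit early, commit often.")


def test_exact_match():
    assert search_notes([PYTHON, GIT], "Git Workflow") == [GIT]


def test_partial_match():
    assert search_notes([PYTHON, GIT], "loop") == [PYTHON]


def test_case_insensitive():
    assert search_notes([PYTHON, GIT], "python") == [PYTHON]


def test_empty_list():
    assert search_notes([], "python") == []


def test_missing_term():
    assert search_notes([PYTHON, GIT], "docker") == []


def test_note_without_body():
    assert search_notes([Note(title="Todo")], "todo") == [Note(title="Todo")]
```

A first draft that compares case-sensitively would pass four of these and fail test_case_insensitive-style checks, which is exactly the situation James is in.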

What You Will Learn

By the end of this chapter, you will be able to:

  • Write an original prompt that specifies function behavior clearly enough for the AI to produce testable code
  • Run a multi-round iteration cycle where each re-prompt targets a specific failing test
  • Read git diff output to understand what the AI changed between rounds
  • Apply the 30% heuristic to decide when to fix manually, re-prompt, or start over
  • Execute a complete 3-round feedback loop on a SmartNotes feature

Chapter Lessons

| Lesson | Title | What You Do | Duration |
|--------|-------|-------------|----------|
| 1 | From Vague Prompt to Working Code | Write your first original prompt, refine from vague to specific | 25 min |
| 2 | Multi-Round Iteration | Handle fix-introduces-bug scenarios across 2-3 rounds | 25 min |
| 3 | Reading Diffs and the Judgment Call | Read git diff output, apply the 30% heuristic | 20 min |
| 4 | The Complete Feedback Loop | Full 3-round capstone on SmartNotes search_notes | 20 min |
| 5 | Chapter 53 Quiz | Scenario-based questions on the iteration workflow | 15 min |

PRIMM-AI+ in This Chapter

Every lesson includes a PRIMM-AI+ Practice section following the five-stage cycle from Chapter 42. This chapter applies PRIMM-AI+ to process rather than syntax: you predict what the AI will produce, run the generation, investigate the gap between expected and actual output, modify your prompt, and make the final version yourself.

| Stage | What You Do | What It Builds |
|-------|-------------|----------------|
| Predict [AI-FREE] | Predict how the AI will interpret your prompt before running it, with a confidence score (1-5) | Calibrates your prompt intuition |
| Run | Execute the prompt and run pytest; compare output to prediction | Creates the feedback loop |
| Investigate | Identify which tests fail and classify errors using the Error Taxonomy from Chapter 43 | Makes your diagnostic reasoning visible |
| Modify | Rewrite the prompt with a specific fix, then predict the new result | Tests whether your prompt refinement transfers |
| Make [Mastery Gate] | Complete a full iteration cycle from scratch without guidance | Proves you can run the process independently |

Syntax Card: Chapter 53

No new Python syntax. This card covers the iteration process itself.

# The Iteration Loop
# ──────────────────────────────────────────────────────
# Step 1: Write a prompt (specify types, behavior, edge cases)
# Step 2: Run pytest
# Step 3: Read failures (which tests, what error category?)
# Step 4: Read the diff (what did the AI change?)
# Step 5: Decide: re-prompt with specifics / fix manually / start over

# Prompt Refinement Levels
# ──────────────────────────────────────────────────────
# Level 0 (vague): "Write a search function"
# Level 1 (typed): "Write search_notes(notes: list[Note], term: str) -> list[Note]"
# Level 2 (behavior): "...case-insensitive match on title and body"
# Level 3 (edge): "...return empty list when no matches, handle empty string term"

# Diff Reading
# ──────────────────────────────────────────────────────
# + lines = added by this change
# - lines = removed by this change
# @@ -12,4 +12,6 @@ = hunk header (old file: 4 lines starting at line 12;
#                     new file: 6 lines starting at line 12)
# Unchanged lines = context (prefixed with a single space)

# The 30% Heuristic
# ──────────────────────────────────────────────────────
# < 30% of code needs change -> fix manually (faster than re-prompting)
# 30-70% needs change -> re-prompt with specific instructions
# > 70% is wrong -> start over with a better prompt
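As a worked example of the diff markers above, here is a hypothetical hunk (code and line numbers invented for practice) from a Round 2 fix that makes the match case-insensitive:

# Example hunk (hypothetical):
# @@ -10,4 +10,4 @@ def search_notes(notes, term):
#      results = []
#      for note in notes:
# -        if term in note.title:
# +        if term.lower() in note.title.lower():
#              results.append(note)

Reading it: two context lines, one removed line, one added line. The fix lowercases both sides of the comparison but still checks only note.title, so a body-match test would stay red.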

Prerequisites

Before starting this chapter, you should be able to:

  • Complete a single re-prompt cycle with AI-generated code (Chapter 46 Lesson 4)
  • Write if/elif/else branches and for loops (Chapter 50)
  • Define @dataclass types with typed fields (Chapter 51)
  • Write and run pytest tests, including parametrized tests (Chapter 52)
  • Read pytest failure output and classify errors (Chapter 43 Error Taxonomy)

The SmartNotes Connection

This chapter builds a search_notes function through three rounds of iteration. The function uses the Note dataclass from Chapter 51:

from dataclasses import dataclass

@dataclass
class Note:
    title: str
    body: str
    tags: list[str]
    word_count: int
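Constructing a few notes from this dataclass looks like the sketch below. The sample titles, bodies, and counts are invented; the dataclass is repeated so the snippet runs standalone.

```python
from dataclasses import dataclass


@dataclass
class Note:
    title: str
    body: str
    tags: list[str]
    word_count: int


notes = [
    Note(title="Python Basics", body="Variables, loops, functions.",
         tags=["python", "beginner"], word_count=52),
    Note(title="Git Workflow", body="Branch, commit, merge.",
         tags=["git"], word_count=34),
]

# Dataclasses generate __repr__ and __eq__ automatically, which is
# what makes pytest assertions like `result == [notes[0]]` readable.
print(notes[0].title)  # → Python Basics
```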

Across the four lessons, you will:

  • Round 1: Prompt the AI to generate search_notes. Some tests pass, some fail.
  • Round 2: Re-prompt targeting specific failures. The fix introduces a new bug.
  • Round 3: Read the diff, decide whether to re-prompt or fix manually, and get all tests green.
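The chapter does not prescribe the final code, but a version that satisfies the behaviors above might look like this sketch. The empty-term behavior is one reasonable reading of the spec, not the chapter's official choice.

```python
from dataclasses import dataclass


@dataclass
class Note:
    title: str
    body: str
    tags: list[str]
    word_count: int


def search_notes(notes: list[Note], term: str) -> list[Note]:
    """Case-insensitive search over title and body.

    Returns an empty list when nothing matches. An empty term is
    treated here as matching nothing (assumption; the chapter may
    choose "match everything" instead).
    """
    if not term:
        return []
    needle = term.lower()
    return [
        note for note in notes
        if needle in note.title.lower() or needle in note.body.lower()
    ]
```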

By the end, you will have a working search_notes and, more importantly, a repeatable process for getting any AI-generated function from "mostly right" to "fully correct."