
Common AI Failure Patterns

In Lesson 1, James learned to read tracebacks bottom-up. In Lesson 2, he learned to print-debug silent bugs. Both skills answered the question: "I have a bug. How do I find it?" This lesson flips the question: "Can I spot the bug before I even run the code?"

Emma opens smartnotes_buggy.py and shows James five functions. "Four of these have bugs. The bugs are different, but they all follow patterns I have seen AI make hundreds of times. I want you to spot them without running the code."

James frowns. "How? The bugs could be anything."

"They could, but they aren't. AI makes predictable mistakes. Think about your distribution center: when packages arrive damaged, it's not random. Crushed boxes, mislabeled items, wrong quantities -- you learned to spot those patterns at a glance instead of opening every package. AI code is the same. Learn the five patterns, and you catch most bugs on first read."


Pattern 1: Off-by-One Errors in Boundaries

Look at this SmartNotes function:

def categorize_note(note: Note) -> str:
    """Return 'long' if word_count is 200 or more, else 'short'.

    The boundary is 200: notes with exactly 200 words are 'long'.
    """
    if note.word_count > 200:
        return "long"
    return "short"
Spot the bug

The docstring says "200 or more" makes a note long. But > 200 excludes exactly 200. A note with 200 words returns "short" instead of "long".

The fix: Change > to >=:

if note.word_count >= 200:

Error Taxonomy category: Specification Error. The code runs without crashing, and the logic is internally consistent. The problem is that the code does not match what the docstring specifies.

How to spot it on first read: Compare the operator (>, >=, <, <=) against the docstring. If the docstring says "or equal," the operator must include =. Test at the exact boundary value: what happens at 200, not 199 or 201?
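One fast way to run this check is a boundary probe: call the corrected function at the boundary value and one step to either side. This is a minimal sketch, assuming a stripped-down stand-in for the SmartNotes Note class (the real one has more fields):

```python
from dataclasses import dataclass

@dataclass
class Note:
    # Minimal stand-in for the SmartNotes Note class.
    word_count: int

def categorize_note(note: Note) -> str:
    """Return 'long' if word_count is 200 or more, else 'short'."""
    if note.word_count >= 200:  # >= keeps exactly 200 in the 'long' bucket
        return "long"
    return "short"

# Probe the exact boundary plus one value on each side.
for wc in (199, 200, 201):
    print(wc, categorize_note(Note(word_count=wc)))
```

With the buggy `>`, the 200 line would print "short" and the probe catches it immediately.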


Pattern 2: Wrong Operator

You already found this pattern in Lesson 2. The reading_time_minutes function used // (floor division) instead of / (true division). Here is a second example from smartnotes_buggy.py:

def filter_notes_by_all_tags(
    notes: list[Note],
    required_tags: list[str],
) -> list[Note]:
    """Return notes that contain ALL of the given tags."""
    result: list[Note] = []
    for note in notes:
        if any(tag in note.tags for tag in required_tags):
            result.append(note)
    return result
Spot the bug

The docstring says "ALL of the given tags." But the code uses any(), which returns True if the note has at least one matching tag. A note with only one of three required tags would still be included.

The fix: Change any() to all():

if all(tag in note.tags for tag in required_tags):

Error Taxonomy category: Specification Error. The code is valid Python. It just does not match what the function promises.

How to spot it on first read: Check that the operator matches the docstring's intent. Key pairs to watch: // vs /, any() vs all(), == vs is, and vs or. If the docstring says "all," the code should use all().
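The any()/all() pair is easy to check in isolation. A minimal sketch of the difference, using plain lists of tag strings rather than full Note objects:

```python
note_tags = ["python", "debugging"]
required = ["python", "testing", "debugging"]

# any(): True if at least ONE required tag is present on the note.
has_any = any(tag in note_tags for tag in required)

# all(): True only if EVERY required tag is present -- what "ALL" demands.
has_all = all(tag in note_tags for tag in required)

print(has_any, has_all)  # True False
```

A note missing "testing" slips through the any() version but is correctly rejected by all().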


Pattern 3: Missing Edge Cases

Here is average_word_count from smartnotes_buggy.py:

def average_word_count(notes: list[Note]) -> float:
    """Return the average word count across all notes.

    Returns 0.0 if the list is empty.
    """
    total: int = 0
    for note in notes:
        total += note.word_count
    return total / len(notes)
Spot the bug

The docstring says "Returns 0.0 if the list is empty." But the code never checks for an empty list. When notes is [], len(notes) is 0, and total / 0 raises ZeroDivisionError.

The fix: Add an early return:

def average_word_count(notes: list[Note]) -> float:
    """Return the average word count across all notes.

    Returns 0.0 if the list is empty.
    """
    if not notes:
        return 0.0
    total: int = 0
    for note in notes:
        total += note.word_count
    return total / len(notes)

Error Taxonomy category: Data/Edge-Case Error. The function works for normal inputs but crashes on a valid edge case.

How to spot it on first read: Ask three questions about every function: "What happens with an empty list? What happens with None? What happens with a single element?" If the code does not handle these, it probably should.
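Those three questions translate directly into tiny probe calls. Here is a sketch against the corrected function, again assuming a minimal Note stand-in:

```python
from dataclasses import dataclass

@dataclass
class Note:
    # Minimal stand-in for the SmartNotes Note class.
    word_count: int

def average_word_count(notes: list[Note]) -> float:
    """Return the average word count; 0.0 for an empty list."""
    if not notes:  # empty-list edge case handled up front
        return 0.0
    return sum(n.word_count for n in notes) / len(notes)

# The edge-case probes: empty input, single element, normal input.
print(average_word_count([]))
print(average_word_count([Note(word_count=120)]))
print(average_word_count([Note(word_count=100), Note(word_count=200)]))
```

Without the early return, the first probe crashes with ZeroDivisionError, which is exactly the signal you want before shipping.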


Pattern 4: Incorrect Type Narrowing

AI frequently ignores optional parameters:

def find_notes_by_tag(
    notes: list[Note],
    tag: str | None = None,
) -> list[Note]:
    """Return notes matching the given tag.

    If tag is None, return all notes.
    """
    return [note for note in notes if tag in note.tags]
Spot the bug

When tag is None, the expression None in note.tags does not crash (Python checks membership for any type), but it returns False for every note. The function returns an empty list instead of all notes.
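You can confirm this silent behavior in two lines. Membership tests accept any type, so None never triggers an error; it simply never matches:

```python
tags = ["python", "debugging"]

# Membership checks never crash on None -- they just return False,
# which is why the buggy filter silently drops every note.
print(None in tags)      # False
print("python" in tags)  # True
```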

The fix: Check for None before filtering:

def find_notes_by_tag(
    notes: list[Note],
    tag: str | None = None,
) -> list[Note]:
    """Return notes matching the given tag.

    If tag is None, return all notes.
    """
    if tag is None:
        return list(notes)
    return [note for note in notes if tag in note.tags]

Error Taxonomy category: Data/Edge-Case Error. The function type signature says None is valid, but the body never handles it.

How to spot it on first read: Look for parameters with | None or = None in the type annotation. Then search the function body for if ... is None. If the None check is missing, the function likely mishandles that case.


Pattern 5: Scope Errors Across Branches

def summarize_note(note: Note) -> str:
    """Return a summary string.

    If the note is a draft, prefix with '[DRAFT]'.
    Otherwise, prefix with the author name.
    """
    if note.is_draft:
        prefix = "[DRAFT]"
    elif note.author != "Anonymous":
        prefix = note.author

    return f"{prefix}: {note.title} ({note.word_count} words)"
Spot the bug

If note.is_draft is False AND note.author is "Anonymous", neither branch assigns prefix. The return line then raises UnboundLocalError because prefix was never bound (the exact message varies by Python version; older versions say "local variable 'prefix' referenced before assignment").

The fix: Add a default assignment or an else branch:

def summarize_note(note: Note) -> str:
    """Return a summary string."""
    if note.is_draft:
        prefix = "[DRAFT]"
    elif note.author != "Anonymous":
        prefix = note.author
    else:
        prefix = "Anonymous"

    return f"{prefix}: {note.title} ({note.word_count} words)"

Error Taxonomy category: Logic Error. The variable exists on some code paths but not all, and Python only discovers this at runtime.

How to spot it on first read: When a variable is assigned inside an if or elif block, trace all possible branches. If any branch does not assign that variable, and the variable is used after the block, you have a scope error. Check: "Is every variable assigned on ALL branches, not just some?"
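The failure only appears on the uncovered path, which is what makes it easy to miss in testing. A minimal sketch with plain arguments instead of a Note object (the function name make_prefix is hypothetical, chosen just to isolate the branch structure):

```python
def make_prefix(is_draft: bool, author: str) -> str:
    # Same branch structure as summarize_note: no else, so one path
    # falls through without ever assigning prefix.
    if is_draft:
        prefix = "[DRAFT]"
    elif author != "Anonymous":
        prefix = author
    return prefix

print(make_prefix(True, "Emma"))   # covered path: works
print(make_prefix(False, "Emma"))  # covered path: works
try:
    make_prefix(False, "Anonymous")  # uncovered path
except UnboundLocalError as exc:
    print("crashed:", type(exc).__name__)
```

Two of three calls succeed, so a casual test run looks green; only tracing every branch reveals the gap.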


Your Five-Question Checklist

You have now seen five patterns. Each one has a question you can ask on first read. Write these down or type them into a note:

  1. "Did AI use the right operator?" -- Check // vs /, any() vs all(), == vs is, and vs or. Match against the docstring.
  2. "What happens at the exact boundary?" -- Test at 200, not 199 or 201. Check > vs >=, < vs <=.
  3. "What happens with empty input?" -- Test with [], None, "", and single-element inputs.
  4. "Does every optional parameter get a None check?" -- Look for | None or = None in the signature, then find the corresponding if ... is None in the body.
  5. "Is every variable assigned on ALL branches?" -- Trace if/elif/else chains. If any branch skips the assignment, there is a bug.

Five questions. Takes about 60 seconds to run through. Catches the majority of AI-generated bugs before you ever press Enter.


PRIMM-AI+ Practice: Spot the Pattern

Predict [AI-FREE]

Press Shift+Tab to enter Plan Mode.

Look at this SmartNotes function. Without running it, identify: which of the five failure patterns is present, and what is the bug? Write your answer and a confidence score from 1 to 5.

def count_long_notes(notes: list[Note], threshold: int = 100) -> int:
    """Return the number of notes with word_count above the threshold.

    Notes with word_count equal to the threshold are NOT counted.
    """
    count: int = 0
    for note in notes:
        if note.word_count > threshold:
            count += 1
    return count
Check your prediction

This function is correct. The docstring says "above the threshold" and "equal to the threshold are NOT counted," and the code uses > (not >=), which matches.

If you spent time looking for a bug and found none, that is the right answer. Not every function is buggy. The checklist helps you verify correctness too, not just find errors. Running through the five questions and finding no issues is a valid (and fast) outcome.

Run

Press Shift+Tab to exit Plan Mode.

Create a file called pattern_practice.py with the function above. Add test calls:

from smartnotes_buggy import Note

notes = [
    Note(title="Short", body="Hi", word_count=50),
    Note(title="Boundary", body="At limit", word_count=100),
    Note(title="Long", body="Extended note", word_count=150),
]

print(count_long_notes(notes, threshold=100)) # Expected: 1 (only "Long")

Run uv run python pattern_practice.py and confirm the result matches your prediction.

Investigate

In Claude Code, ask:

/investigate @pattern_practice.py

Why does this function use > instead of >=? Is this correct
given the docstring?

Compare the AI's reasoning to your own. The AI should confirm the operator matches the spec.

Modify

Change the docstring to say "equal to or above the threshold ARE counted." Now the code has Pattern 1 (off-by-one). Fix the code to match the new docstring, then add a test for the exact boundary value.

Make [Mastery Gate]

Review these three functions without running them. For each one, name the failure pattern (Pattern 1 through 5) and describe the bug. Use /bug in Claude Code to classify each error before looking at the fix. Then run to verify.

Function A:

def has_required_fields(note: Note) -> bool:
    """Return True if title is non-empty and word_count is positive."""
    if note.title:
        valid = True
    if note.word_count > 0:
        valid = True
    return valid

Function B:

def format_tags(note: Note, separator: str | None = None) -> str:
    """Return tags joined by separator. Default separator is ', '."""
    return separator.join(note.tags)

Function C:

def notes_above_average(notes: list[Note]) -> list[Note]:
    """Return notes with word_count strictly above the average."""
    avg: float = sum(n.word_count for n in notes) / len(notes)
    return [n for n in notes if n.word_count >= avg]
Answers

Function A: Pattern 5 (Scope Error). If note.title is empty ("") and note.word_count is not positive, neither if assigns valid, and the return line raises UnboundLocalError. (The function is also logically wrong even when it runs: either check alone sets valid to True, and nothing ever sets it to False, so "non-empty AND positive" is not enforced.)

Function B: Pattern 4 (Incorrect Type Narrowing). When separator is None (the default), None.join(note.tags) raises AttributeError. The function needs if separator is None: separator = ", " before the join.

Function C: Pattern 1 (Off-by-One). The docstring says "strictly above," but the code uses >= instead of >. Notes with exactly the average word count are included when they should not be. (There is also a lurking Pattern 3: an empty list makes len(notes) zero and raises ZeroDivisionError.)

2/3 correct = mastery.


Try With AI

Opening Claude Code

If Claude Code is not already running, open your terminal, navigate to your SmartNotes project folder, and type claude. If you need a refresher, Chapter 44 covers the setup.

Prompt 1: Generate Pattern #2

Write a SmartNotes function called split_notes_by_length that
takes a list of Notes and a threshold, and returns two lists:
short notes and long notes. Introduce a wrong-operator bug
(Pattern 2) where you use the wrong comparison or the wrong
built-in function. Do NOT tell me which operator is wrong.

Read the generated code. Run through your five-question checklist. Can you spot the wrong operator before running the code?

What you're learning: You are practicing pattern recognition on code you have never seen before. The checklist question "Did AI use the right operator?" should lead you directly to the bug.

Prompt 2: The Hardest Pattern

I learned five AI failure patterns: off-by-one, wrong operator,
missing edge case, incorrect type narrowing, and scope errors.
Which of these five is hardest to spot during code review, and
why? Give a concrete example.

Compare the AI's answer to your own experience so far. Which pattern tripped you up the most in the exercises above?

What you're learning: You are building metacognitive awareness of your own debugging strengths and weaknesses. The AI's perspective may highlight a pattern you underestimated.

Prompt 3: Improve the Checklist

Here is my five-question checklist for reviewing AI-generated
Python code:

1. Did AI use the right operator?
2. What happens at the exact boundary?
3. What happens with empty input?
4. Does every optional parameter get a None check?
5. Is every variable assigned on ALL branches?

Are there common AI mistakes I'm missing? Suggest up to two
additional questions and explain why they matter.

Read the suggestions. If any resonate, add them to your checklist. A good checklist grows with experience.

What you're learning: You are treating the AI as a peer reviewer for your debugging process, not just for code. This is the same skill you will use when reviewing AI-generated specifications and designs in later chapters.


James leans back. "Five patterns. It's like quality control at the warehouse. We have categories: crushed packages, mislabeled items, wrong quantities, missing contents, items in the wrong slot. Every new hire memorizes those five because they cover 80% of all defects. Once you know the categories, you can scan a shelf in seconds instead of opening every box."

"That's actually a better way to explain it than how I usually teach this," Emma says. "I tend to list the patterns as abstract categories: off-by-one, wrong operator, missing edge case. Your version is more concrete. 'Crushed package' is immediately visual. I might steal that for my next workshop."

James grins. "Feel free. So where are we now? I can read tracebacks for crashes, print-debug silent bugs, and now scan for common patterns before I even run the code."

"Right. You have three tools: traceback reading, print debugging, and pattern recognition. But you have been using them in isolation. What is the full process when you sit down with a buggy function? Reproduce, isolate, identify, fix, verify -- the complete debugging loop. That's next."