SmartNotes Boundary TDG

James looks at the pieces he has built across four lessons: NoteCreate(BaseModel) with Field constraints, model_validate_json() for parsing, and the boundary pattern with @dataclass Note inside. Each piece worked in isolation. Now he assembles them.

"This is the TDG pattern from Chapter 46," Emma says. "Define the types, write the tests, generate the implementation. Except this time, you are not generating a single function. You are building a boundary layer."

She pauses. "Think of it like a checkpoint at the border. Everything that enters gets inspected. Once inside, it moves freely."

If you're new to programming

This lesson brings together everything from Chapter 55 into one working system. You will not learn new syntax. Instead, you will practice assembling known pieces into a complete solution and testing every path through it. This is what real programming looks like: combining tools you already understand.


The Complete Boundary Layer

Here are all three pieces together. Read through them before writing any code:

The Pydantic model (boundary):

from pydantic import BaseModel, Field


class NoteCreate(BaseModel):
    title: str = Field(min_length=1, max_length=200)
    body: str = Field(min_length=1)
    word_count: int = Field(ge=0)
    author: str
    tags: list[str] = []
    is_draft: bool = False

The dataclass (internal):

from dataclasses import dataclass, field


@dataclass
class Note:
    title: str
    body: str
    word_count: int
    author: str
    tags: list[str] = field(default_factory=list)
    is_draft: bool = False

The conversion function:

def to_note(validated: NoteCreate) -> Note:
    """Convert a validated NoteCreate to an internal Note."""
    return Note(
        title=validated.title,
        body=validated.body,
        word_count=validated.word_count,
        author=validated.author,
        tags=list(validated.tags),
        is_draft=validated.is_draft,
    )
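Note the list(validated.tags) call in the conversion: it copies the list so the internal Note never shares mutable state with the Pydantic model. Here is a small standalone sketch of why the copy matters, using simplified two-field stand-ins for NoteCreate and Note (the names TagsIn and TagsNote are invented for this demo):

```python
from dataclasses import dataclass, field

from pydantic import BaseModel


class TagsIn(BaseModel):
    # Simplified stand-in for NoteCreate: only the tags field matters here.
    tags: list[str] = []


@dataclass
class TagsNote:
    # Simplified stand-in for Note.
    tags: list[str] = field(default_factory=list)


validated = TagsIn(tags=["python"])

shared = TagsNote(tags=validated.tags)        # no copy: both names point at one list
copied = TagsNote(tags=list(validated.tags))  # defensive copy, as in to_note

shared.tags.append("oops")
print(validated.tags)  # ['python', 'oops'] -- the model was mutated through the alias
print(copied.tags)     # ['python'] -- the copied version stayed clean
```

Without the copy, code deep inside the application could silently mutate data that the boundary layer already validated.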

The loading function:

def load_and_validate_note(json_string: str) -> Note:
    """Parse JSON, validate with Pydantic, convert to internal Note."""
    validated: NoteCreate = NoteCreate.model_validate_json(json_string)
    return to_note(validated)

Four pieces, each with a single responsibility:

  • NoteCreate validates incoming data
  • Note represents clean data inside the app
  • to_note converts between the two
  • load_and_validate_note ties the pipeline together

The Full Test Suite

Each test targets one category of failure. Together, they cover every path through the boundary layer:

import pytest
from pydantic import ValidationError


# ── Valid Input ──────────────────────────────────────────────

VALID_JSON: str = """{
    "title": "Test Note",
    "body": "This is a valid note body.",
    "word_count": 42,
    "author": "James",
    "tags": ["python", "testing"],
    "is_draft": false
}"""


def test_valid_json_produces_note() -> None:
    note: Note = load_and_validate_note(VALID_JSON)
    assert note.title == "Test Note"
    assert note.word_count == 42
    assert note.tags == ["python", "testing"]
    assert note.is_draft is False


def test_valid_json_with_defaults() -> None:
    minimal_json: str = """{
        "title": "Minimal",
        "body": "Just enough.",
        "word_count": 5,
        "author": "Emma"
    }"""
    note: Note = load_and_validate_note(minimal_json)
    assert note.tags == []
    assert note.is_draft is False


# ── Invalid Types ────────────────────────────────────────────

def test_wrong_type_title_rejected() -> None:
    bad_json: str = """{
        "title": 999,
        "body": "Valid body",
        "word_count": 10,
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


def test_wrong_type_word_count_rejected() -> None:
    bad_json: str = """{
        "title": "Valid Title",
        "body": "Valid body",
        "word_count": "many",
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


# ── Constraint Violations ───────────────────────────────────

def test_empty_title_rejected() -> None:
    bad_json: str = """{
        "title": "",
        "body": "Valid body",
        "word_count": 10,
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


def test_title_too_long_rejected() -> None:
    long_title: str = "A" * 201
    bad_json: str = f'{{"title": "{long_title}", "body": "Valid", "word_count": 10, "author": "J"}}'
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


def test_negative_word_count_rejected() -> None:
    bad_json: str = """{
        "title": "Valid Title",
        "body": "Valid body",
        "word_count": -1,
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


def test_empty_body_rejected() -> None:
    bad_json: str = """{
        "title": "Valid Title",
        "body": "",
        "word_count": 10,
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


# ── Missing Fields ──────────────────────────────────────────

def test_missing_required_field_rejected() -> None:
    bad_json: str = """{
        "body": "No title here",
        "word_count": 10,
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


# ── Malformed JSON ──────────────────────────────────────────

def test_malformed_json_rejected() -> None:
    broken_json: str = '{title: "no quotes on key"}'
    with pytest.raises(ValidationError):
        load_and_validate_note(broken_json)

Ten tests. Each one creates a specific kind of bad input and verifies that the boundary layer catches it. Compare this to the twelve tests from Chapter 54's manual validation: similar coverage, but the tests are simpler because Pydantic handles the detection.
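To see what "Pydantic handles the detection" means in practice, trigger two violations in a single payload and inspect the one ValidationError that comes back. A standalone sketch (the NoteCreate model from above is reproduced here, minus the optional fields, so the snippet runs on its own):

```python
from pydantic import BaseModel, Field, ValidationError


class NoteCreate(BaseModel):
    title: str = Field(min_length=1, max_length=200)
    body: str = Field(min_length=1)
    word_count: int = Field(ge=0)
    author: str


# Empty title AND negative word_count: two violations in one payload.
bad: str = '{"title": "", "body": "x", "word_count": -1, "author": "J"}'

try:
    NoteCreate.model_validate_json(bad)
except ValidationError as exc:
    # One exception carries every failure, not just the first one hit.
    problems = exc.errors()
    print(len(problems))                          # 2
    print(sorted(e["loc"][0] for e in problems))  # ['title', 'word_count']
```

This is why the Pydantic tests need only one pytest.raises(ValidationError) each: the detection logic, and the error reporting, live in the model rather than in the tests.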


Running the Tests

Put everything in a single file called test_boundary.py (or split the models into models.py and import them). Then run:

uv run pytest test_boundary.py -v

You should see all ten tests pass:

test_boundary.py::test_valid_json_produces_note PASSED
test_boundary.py::test_valid_json_with_defaults PASSED
test_boundary.py::test_wrong_type_title_rejected PASSED
test_boundary.py::test_wrong_type_word_count_rejected PASSED
test_boundary.py::test_empty_title_rejected PASSED
test_boundary.py::test_title_too_long_rejected PASSED
test_boundary.py::test_negative_word_count_rejected PASSED
test_boundary.py::test_empty_body_rejected PASSED
test_boundary.py::test_missing_required_field_rejected PASSED
test_boundary.py::test_malformed_json_rejected PASSED

Every test category is covered: valid input, wrong types, constraint violations, missing fields, malformed JSON. If you add a new field to NoteCreate with a constraint, you add one test for the constraint. The boundary layer scales cleanly.


PRIMM-AI+ Practice: Full Boundary

Predict [AI-FREE]

Press Shift+Tab to enter Plan Mode before predicting.

Look at this JSON input without running it. Predict whether load_and_validate_note returns a Note or raises ValidationError. Write your prediction and a confidence score from 1 to 5 before checking.

json_input: str = """{
    "title": "A",
    "body": "B",
    "word_count": 0,
    "author": "C",
    "tags": [],
    "is_draft": true
}"""

result = load_and_validate_note(json_input)
Check your prediction

Returns a valid Note. Every field passes:

  • title is "A": 1 character meets min_length=1, under max_length=200
  • body is "B": 1 character meets min_length=1
  • word_count is 0: meets ge=0 (greater than or equal to zero)
  • author is "C": no constraints beyond type
  • tags is []: empty list is valid (no min_length on the list itself)
  • is_draft is true: valid boolean

The tricky part: single-character strings are valid because min_length=1 means "at least 1", not "at least something meaningful." Content quality is a business concern, not a type concern.
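If you want to verify this directly, here is a standalone sketch (the NoteCreate model from earlier, repeated so the snippet runs on its own):

```python
from pydantic import BaseModel, Field


class NoteCreate(BaseModel):
    title: str = Field(min_length=1, max_length=200)
    body: str = Field(min_length=1)
    word_count: int = Field(ge=0)
    author: str
    tags: list[str] = []
    is_draft: bool = False


# The minimal-but-valid payload from the Predict step.
note = NoteCreate.model_validate_json(
    '{"title": "A", "body": "B", "word_count": 0, "author": "C", "tags": [], "is_draft": true}'
)
print(note.title, note.word_count, note.is_draft)  # A 0 True
```

Every constraint is satisfied at its exact boundary value, so no ValidationError is raised.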

Run

Press Shift+Tab to exit Plan Mode.

Create the complete boundary layer in a file. Run uv run pytest -v to verify all ten tests pass. Then add one more test: what happens when tags contains a non-string element like "tags": [1, 2, 3]? Predict whether Pydantic rejects it or coerces the integers to strings.

Investigate

In Claude Code, type /investigate and ask about what Pydantic considers "malformed" JSON. In the test_malformed_json_rejected test, change the broken JSON to "" (empty string). Does Pydantic still raise ValidationError? What about "null"? Try each one and read the error messages.

Modify

Add a seventh field to both NoteCreate and Note: priority: int = Field(ge=1, le=5) with a default of 3. Update the to_note function. Write two new tests: one that uses the default priority, and one that rejects priority=0.

Make [Mastery Gate]

Without looking at any examples, use /tdg in Claude Code to scaffold a TDG cycle and build a complete boundary layer for a different domain:

  1. A BaseModel called ProductInput with name: str = Field(min_length=1), price: float = Field(gt=0), and quantity: int = Field(ge=0)
  2. A @dataclass called Product with the same three fields
  3. A to_product(validated: ProductInput) -> Product conversion function
  4. A load_and_validate_product(json_string: str) -> Product loading function
  5. Six tests: valid input, wrong type, empty name, zero price, negative quantity, malformed JSON

Run uv run pytest to verify all six tests pass.


Try With AI

Opening Claude Code

If Claude Code is not already running, open your terminal, navigate to your SmartNotes project folder, and type claude. If you need a refresher, Chapter 44 covers the setup.

Prompt 1: Review Your Boundary Layer

Here is my SmartNotes boundary layer. Review it for
completeness. Are there any validation paths I am
not testing? Any edge cases I should add?

[Paste your complete test_boundary.py file here]

Read the AI's suggestions. It might recommend testing boundary values (title with exactly 200 characters), testing with extra unknown fields in the JSON, or testing with null values. Add any tests that seem valuable.

What you're learning: You are using the AI as a code reviewer to find gaps in your test coverage.

Prompt 2: Generate a New Boundary Layer

Write a complete boundary layer for a "Task" domain
with these fields:
- description: str, at least 1 character
- priority: int, between 1 and 5 inclusive
- completed: bool, default False

Include a Pydantic BaseModel, a @dataclass, a
conversion function, a JSON loading function, and
tests for valid input, wrong types, and constraint
violations. Use type annotations everywhere.

Compare the AI's output to what you wrote in the Mastery Gate. Check: does it follow the same boundary pattern? Does it test the same categories (valid, wrong types, constraints, missing, malformed)?

What you're learning: You are evaluating AI-generated architecture against a pattern you already understand.

Prompt 3: Compare to Manual Validation

In Chapter 54, I wrote a validate_note_data function
with isinstance checks and raise statements. Now I have
a Pydantic boundary layer that does the same job. Show
me a side-by-side comparison of how each approach
handles three cases: wrong type, empty string, and
missing field. Which approach gives better error messages?

Read the AI's comparison. Does it accurately represent both approaches? The manual version stops at the first error; the Pydantic version reports all errors at once. Check whether the AI identifies this difference. Reflect on which error messages would be more helpful when debugging real data problems.

What you're learning: You are consolidating your understanding of both validation approaches by comparing their behavior on identical inputs.


Phase 4 Preview

You have built a boundary layer that validates external data and converts it to internal types. In Phase 4, you will learn debugging tools that help you trace validation failures in larger programs, and you will write independent TDG cycles where you design the entire boundary layer from scratch without guidance.



James closed his test file and leaned back. "Four pieces: Pydantic model at the gate, dataclass inside the warehouse, a conversion step between them, and a loading dock that ties it all together. Five categories of bad shipments to test against. The boundary scales because adding a new product line is four changes, not a thirty-line rewrite."

"That's Phase 3 complete," Emma said. "You can read Python, write tests, iterate with AI, handle errors, and validate data with Pydantic. The manual pain from Chapter 54 is behind you."

"It feels like the warehouse finally has a proper receiving system. Barcodes instead of clipboards, automated inspection instead of manual spot checks, clean inventory on the shelves."

Emma nodded. "There's one thing I'm not sure about, though. I keep telling students that five test categories are enough: valid input, wrong types, constraints, missing fields, malformed JSON. But I wonder if there's a sixth category I'm missing. Edge cases in the conversion step, maybe. Data that passes Pydantic but breaks during the dataclass conversion."

"Something to watch for," James said. "So what's next? I've got the tools, I've got the process. Where does it go from here?"

"Phase 4. You'll learn debugging tools for when the error message alone isn't enough, and you'll run full TDG cycles independently, from spec to tests to working code, without me guiding each step. The training wheels come off."