SmartNotes Boundary TDG
James looks at the pieces he has built across four lessons: NoteCreate(BaseModel) with Field constraints, model_validate_json() for parsing, and the boundary pattern with @dataclass Note inside. Each piece worked in isolation. Now he assembles them.
"This is the TDG pattern from Chapter 46," Emma says. "Define the types, write the tests, generate the implementation. Except this time, you are not generating a single function. You are building a boundary layer."
She pauses. "Think of it like a checkpoint at the border. Everything that enters gets inspected. Once inside, it moves freely."
This lesson brings together everything from Chapter 55 into one working system. You will not learn new syntax. Instead, you will practice assembling known pieces into a complete solution and testing every path through it. This is what real programming looks like: combining tools you already understand.
The Complete Boundary Layer
Here are all three pieces together. Read through them before writing any code:
The Pydantic model (boundary):
```python
from pydantic import BaseModel, Field

class NoteCreate(BaseModel):
    title: str = Field(min_length=1, max_length=200)
    body: str = Field(min_length=1)
    word_count: int = Field(ge=0)
    author: str
    tags: list[str] = []
    is_draft: bool = False
```
The dataclass (internal):
```python
from dataclasses import dataclass, field

@dataclass
class Note:
    title: str
    body: str
    word_count: int
    author: str
    tags: list[str] = field(default_factory=list)
    is_draft: bool = False
```
The conversion function:
```python
def to_note(validated: NoteCreate) -> Note:
    """Convert a validated NoteCreate to an internal Note."""
    return Note(
        title=validated.title,
        body=validated.body,
        word_count=validated.word_count,
        author=validated.author,
        tags=list(validated.tags),  # copy, so the Note never shares its list with the NoteCreate
        is_draft=validated.is_draft,
    )
```
The loading function:
```python
def load_and_validate_note(json_string: str) -> Note:
    """Parse JSON, validate with Pydantic, convert to internal Note."""
    validated: NoteCreate = NoteCreate.model_validate_json(json_string)
    return to_note(validated)
```
Four pieces, each with a single responsibility:
- `NoteCreate` validates incoming data
- `Note` represents clean data inside the app
- `to_note` converts between the two
- `load_and_validate_note` ties the pipeline together
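Before moving to the tests, here is the whole pipeline in one runnable sketch. The definitions are repeated so the snippet stands on its own; the two sample JSON strings are illustrative inputs, not part of SmartNotes:

```python
from dataclasses import dataclass, field

from pydantic import BaseModel, Field, ValidationError

class NoteCreate(BaseModel):
    title: str = Field(min_length=1, max_length=200)
    body: str = Field(min_length=1)
    word_count: int = Field(ge=0)
    author: str
    tags: list[str] = []
    is_draft: bool = False

@dataclass
class Note:
    title: str
    body: str
    word_count: int
    author: str
    tags: list[str] = field(default_factory=list)
    is_draft: bool = False

def to_note(validated: NoteCreate) -> Note:
    return Note(
        title=validated.title,
        body=validated.body,
        word_count=validated.word_count,
        author=validated.author,
        tags=list(validated.tags),
        is_draft=validated.is_draft,
    )

def load_and_validate_note(json_string: str) -> Note:
    validated = NoteCreate.model_validate_json(json_string)
    return to_note(validated)

# Valid JSON crosses the boundary and comes out as a plain dataclass.
note = load_and_validate_note(
    '{"title": "Hello", "body": "First note.", "word_count": 2, "author": "James"}'
)
print(type(note).__name__, note.title)

# Invalid JSON never makes it past the boundary.
try:
    load_and_validate_note('{"title": "", "body": "x", "word_count": 2, "author": "James"}')
except ValidationError:
    print("rejected at the boundary")
```

Notice the checkpoint shape: everything inside the `try` that succeeds hands you a `Note`, never a half-validated dictionary.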
The Full Test Suite
Each test targets one category of failure. Together, they cover every path through the boundary layer:
```python
import pytest
from pydantic import ValidationError

# ── Valid Input ──────────────────────────────────────────────

VALID_JSON: str = """{
    "title": "Test Note",
    "body": "This is a valid note body.",
    "word_count": 42,
    "author": "James",
    "tags": ["python", "testing"],
    "is_draft": false
}"""


def test_valid_json_produces_note() -> None:
    note: Note = load_and_validate_note(VALID_JSON)
    assert note.title == "Test Note"
    assert note.word_count == 42
    assert note.tags == ["python", "testing"]
    assert note.is_draft is False


def test_valid_json_with_defaults() -> None:
    minimal_json: str = """{
        "title": "Minimal",
        "body": "Just enough.",
        "word_count": 5,
        "author": "Emma"
    }"""
    note: Note = load_and_validate_note(minimal_json)
    assert note.tags == []
    assert note.is_draft is False


# ── Invalid Types ────────────────────────────────────────────

def test_wrong_type_title_rejected() -> None:
    bad_json: str = """{
        "title": 999,
        "body": "Valid body",
        "word_count": 10,
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


def test_wrong_type_word_count_rejected() -> None:
    bad_json: str = """{
        "title": "Valid Title",
        "body": "Valid body",
        "word_count": "many",
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


# ── Constraint Violations ───────────────────────────────────

def test_empty_title_rejected() -> None:
    bad_json: str = """{
        "title": "",
        "body": "Valid body",
        "word_count": 10,
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


def test_title_too_long_rejected() -> None:
    long_title: str = "A" * 201
    bad_json: str = f'{{"title": "{long_title}", "body": "Valid", "word_count": 10, "author": "J"}}'
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


def test_negative_word_count_rejected() -> None:
    bad_json: str = """{
        "title": "Valid Title",
        "body": "Valid body",
        "word_count": -1,
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


def test_empty_body_rejected() -> None:
    bad_json: str = """{
        "title": "Valid Title",
        "body": "",
        "word_count": 10,
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


# ── Missing Fields ──────────────────────────────────────────

def test_missing_required_field_rejected() -> None:
    bad_json: str = """{
        "body": "No title here",
        "word_count": 10,
        "author": "James"
    }"""
    with pytest.raises(ValidationError):
        load_and_validate_note(bad_json)


# ── Malformed JSON ──────────────────────────────────────────

def test_malformed_json_rejected() -> None:
    broken_json: str = '{title: "no quotes on key"}'
    with pytest.raises(ValidationError):
        load_and_validate_note(broken_json)
```
Ten tests. Each one creates a specific kind of bad input and verifies that the boundary layer catches it. Compare this to the twelve tests from Chapter 54's manual validation: similar coverage, but the tests are simpler because Pydantic handles the detection.
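When one of these tests fails unexpectedly, it helps to look at what Pydantic actually reported. A small sketch, using a trimmed-down `NoteCreate` with only the constrained fields, that prints the structured error list a `ValidationError` carries:

```python
from pydantic import BaseModel, Field, ValidationError

class NoteCreate(BaseModel):
    title: str = Field(min_length=1, max_length=200)
    body: str = Field(min_length=1)
    word_count: int = Field(ge=0)
    author: str

try:
    # Two violations at once: empty title and negative word_count.
    NoteCreate.model_validate_json(
        '{"title": "", "body": "ok", "word_count": -1, "author": "James"}'
    )
except ValidationError as exc:
    for err in exc.errors():
        # Each entry names the failing field ("loc") and the rule it broke ("type").
        print(err["loc"], err["type"])
```

One exception collects every failure, which is why a single `pytest.raises(ValidationError)` is enough per test: the error list tells you which field tripped.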
Running the Tests
Put everything in a single file called test_boundary.py (or split the models into models.py and import them). Then run:
uv run pytest test_boundary.py -v
You should see all ten tests pass:
```text
test_boundary.py::test_valid_json_produces_note PASSED
test_boundary.py::test_valid_json_with_defaults PASSED
test_boundary.py::test_wrong_type_title_rejected PASSED
test_boundary.py::test_wrong_type_word_count_rejected PASSED
test_boundary.py::test_empty_title_rejected PASSED
test_boundary.py::test_title_too_long_rejected PASSED
test_boundary.py::test_negative_word_count_rejected PASSED
test_boundary.py::test_empty_body_rejected PASSED
test_boundary.py::test_missing_required_field_rejected PASSED
test_boundary.py::test_malformed_json_rejected PASSED
```
Every test category is covered: valid input, wrong types, constraint violations, missing fields, malformed JSON. If you add a new field to NoteCreate with a constraint, you add one test for the constraint. The boundary layer scales cleanly.
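To make the scaling claim concrete, here is what adding one constrained field looks like in practice. The field name `category` and the model name `NoteCreateV2` are illustrations for this sketch, not part of SmartNotes:

```python
from pydantic import BaseModel, Field, ValidationError

# Hypothetical extension: the boundary model gains one new constrained field.
class NoteCreateV2(BaseModel):
    title: str = Field(min_length=1, max_length=200)
    body: str = Field(min_length=1)
    word_count: int = Field(ge=0)
    author: str
    category: str = Field(min_length=1)  # the new field

# ...and the one matching test for its constraint (plain try/except
# here so the sketch runs without pytest):
def test_empty_category_rejected() -> None:
    bad = '{"title": "T", "body": "B", "word_count": 1, "author": "J", "category": ""}'
    try:
        NoteCreateV2.model_validate_json(bad)
        raise AssertionError("expected ValidationError")
    except ValidationError:
        pass

test_empty_category_rejected()
print("ok")
```

One constraint declaration, one test. No new `if` branches anywhere else in the pipeline.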
PRIMM-AI+ Practice: Full Boundary
Predict [AI-FREE]
Look at this JSON input without running it. Predict whether load_and_validate_note returns a Note or raises ValidationError. Write your prediction and a confidence score from 1 to 5 before checking.
```python
json_input: str = """{
    "title": "A",
    "body": "B",
    "word_count": 0,
    "author": "C",
    "tags": [],
    "is_draft": true
}"""
result = load_and_validate_note(json_input)
```
Check your prediction
Returns a valid Note. Every field passes:
- `title` is `"A"`: 1 character meets `min_length=1`, under `max_length=200`
- `body` is `"B"`: 1 character meets `min_length=1`
- `word_count` is `0`: meets `ge=0` (greater than or equal to zero)
- `author` is `"C"`: no constraints beyond type
- `tags` is `[]`: an empty list is valid (no `min_length` on the list itself)
- `is_draft` is `true`: valid boolean
The tricky part: single-character strings are valid because min_length=1 means "at least 1", not "at least something meaningful." Content quality is a business concern, not a type concern.
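You can confirm this directly in a REPL. A minimal sketch with only the constrained fields:

```python
from pydantic import BaseModel, Field

class NoteCreate(BaseModel):
    title: str = Field(min_length=1, max_length=200)
    body: str = Field(min_length=1)
    word_count: int = Field(ge=0)
    author: str

# Single-character strings and a zero count all satisfy the constraints.
note = NoteCreate(title="A", body="B", word_count=0, author="C")
print(note.title, note.word_count)
```

If you want "notes must have meaningful content," that rule belongs in business logic, not in the `Field` constraints.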
Run
Create the complete boundary layer in a file. Run uv run pytest -v to verify all ten tests pass. Then add one more test: what happens when tags contains a non-string element like "tags": [1, 2, 3]? Predict whether Pydantic rejects it or coerces the integers to strings.
Investigate
In the test_malformed_json_rejected test, change the broken JSON to "" (empty string). Does Pydantic still raise ValidationError? What about "null"? Try each one and read the error messages. Understanding what Pydantic considers "malformed" helps you write better tests.
Modify
Add a seventh field to both NoteCreate and Note: priority: int = Field(ge=1, le=5) with a default of 3. Update the to_note function. Write two new tests: one that uses the default priority, and one that rejects priority=0.
Make [Mastery Gate]
Without looking at any examples, build a complete boundary layer for a different domain:
- A `BaseModel` called `ProductInput` with `name: str = Field(min_length=1)`, `price: float = Field(gt=0)`, and `quantity: int = Field(ge=0)`
- A `@dataclass` called `Product` with the same three fields
- A `to_product(validated: ProductInput) -> Product` conversion function
- A `load_and_validate_product(json_string: str) -> Product` loading function
- Six tests: valid input, wrong type, empty name, zero price, negative quantity, malformed JSON
Run uv run pytest to verify all six tests pass.
Try With AI
If Claude Code is not already running, open your terminal, navigate to your SmartNotes project folder, and type claude. If you need a refresher, Chapter 44 covers the setup.
Prompt 1: Review Your Boundary Layer
Here is my SmartNotes boundary layer. Review it for
completeness. Are there any validation paths I am
not testing? Any edge cases I should add?
[Paste your complete test_boundary.py file here]
Read the AI's suggestions. It might recommend testing boundary values (title with exactly 200 characters), testing with extra unknown fields in the JSON, or testing with null values. Add any tests that seem valuable.
What you're learning: You are using the AI as a code reviewer to find gaps in your test coverage.
Prompt 2: Generate a New Boundary Layer
Write a complete boundary layer for a "Task" domain
with these fields:
- description: str, at least 1 character
- priority: int, between 1 and 5 inclusive
- completed: bool, default False
Include a Pydantic BaseModel, a @dataclass, a
conversion function, a JSON loading function, and
tests for valid input, wrong types, and constraint
violations. Use type annotations everywhere.
Compare the AI's output to what you wrote in the Mastery Gate. Check: does it follow the same boundary pattern? Does it test the same categories (valid, wrong types, constraints, missing, malformed)?
What you're learning: You are evaluating AI-generated architecture against a pattern you already understand.
You have built a boundary layer that validates external data and converts it to internal types. In Phase 4, you will learn debugging tools that help you trace validation failures in larger programs, and you will write independent TDG cycles where you design the entire boundary layer from scratch without guidance.
Key Takeaways
- The boundary layer has four pieces. A Pydantic model (validates input), a dataclass (holds clean data), a conversion function (bridges the two), and a loading function (ties the pipeline together).
- Test five categories of failure. Valid input, wrong types, constraint violations, missing fields, and malformed JSON. Each category catches a different kind of problem that real data produces.
- `model_validate_json` replaces `json.loads` plus manual validation. One method call parses, validates, and returns a model instance. Invalid data raises `ValidationError`.
- The boundary pattern scales cleanly. Adding a field means: add it to `NoteCreate` with a constraint, add it to `Note`, update `to_note`, and write one or two tests. No 30-line validation functions to maintain.
- Validate once at the edge, trust everywhere inside. Once data passes through the Pydantic model and converts to a dataclass, every internal function can trust its inputs. No redundant checking needed.
Looking Ahead
You have completed Chapter 55. You can now define Pydantic models, enforce constraints, serialize and parse data, and apply the boundary pattern. The manual validation pain from Chapter 54 is behind you. Next, you will take the Chapter 55 Quiz to test your understanding across all five lessons. After that, Phase 4 introduces debugging and independent TDG design, building on the validation foundation you have built here.