What Is Context Engineering?
Two engineers build contract review agents. Same model. Same basic architecture. One sells for $2,000/month. The other can't give it away.
What's different?
The answer: context quality.
In Chapter 1, you learned that Digital FTEs are AI agents that work 24/7, delivering consistent results at a fraction of human cost. But here's the uncomfortable truth: those same AI models are available to everyone. Your competitors have access to Claude, GPT, and Gemini too. They can spin up the same frontier model in minutes.
The model isn't your moat. Context engineering is.
If you've used AI for real work, you've experienced the breakdown. Your AI followed instructions brilliantly for the first twenty minutes. Then it started ignoring conventions, repeating mistakes you already corrected, producing wildly different outputs for similar inputs. The AI didn't get dumber. Its context got corrupted.
This chapter teaches you the quality control discipline that separates sellable Digital FTEs from expensive toys.
The Definition
Anthropic defines context engineering as:
"The art and science of curating what will go into the limited context window from that constantly evolving universe of possible information."
The guiding principle: find the smallest set of high-signal tokens that maximize the likelihood of some desired outcome.
Your prompt is what you say. Your context is everything the AI already knows when you say it. Context engineering is controlling that "already knows" part.
Five Terms You Need
| Term | Definition |
|---|---|
| Token | The unit AI models use to measure text. Roughly 3/4 of a word on average; "Context engineering" is about 2-3 tokens, depending on the tokenizer. |
| Context | Everything the model processes when generating a response: system prompts, instructions, conversation history, file contents, tool outputs. |
| Context window | Maximum tokens the model can "see" at once (200,000 for Claude). |
| Context engineering | The discipline of designing what goes into that window, where it's positioned, and when it loads. |
| Context rot | When accumulated conversation degrades output quality. Old errors and abandoned approaches compete with current instructions. |
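Exact counts depend on the model's tokenizer, but the 3/4-of-a-word heuristic from the table is enough for budgeting. Here's a minimal sketch of that heuristic in Python; the function and the 4/3 ratio are illustrative assumptions, not an official tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4/3 tokens per word (1 token is ~3/4 of a word).

    Illustrative heuristic only; real BPE tokenizers will differ,
    especially for code, punctuation, and non-English text.
    """
    words = text.split()
    return round(len(words) * 4 / 3)

print(estimate_tokens("Context engineering"))          # ~3 tokens
print(estimate_tokens("The model isn't your moat."))   # ~7 tokens
```

For real billing or budgeting decisions, check counts against your provider's tokenizer rather than a word-count heuristic.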
Why Context Beats Prompts
"Prompt engineering" was the 2023 discipline. It has a ceiling.
| | Prompts | Context |
|---|---|---|
| Token budget | 50-200 tokens | 200,000+ tokens |
| Your control | What you type | What you engineer |
| Share of what the model processes | ~0.1% | ~99.9% |
Your prompt is 0.1% of what the model processes. The other 99.9% is context. If you're optimizing prompts while ignoring context, you're polishing the doorknob while the house is on fire.
This matters for your Digital FTEs. A legal assistant Digital FTE with perfect prompts but corrupted context will hallucinate case citations. A sales Digital FTE with perfect prompts but bloated context will forget customer preferences mid-conversation. The context is what makes the difference between a $50/month chatbot and a $5,000/month professional assistant.
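The arithmetic behind that 0.1% is worth seeing once. A quick sketch with illustrative numbers (a 150-token prompt against Claude's 200,000-token window):

```python
PROMPT_TOKENS = 150        # a typical hand-typed prompt (illustrative)
WINDOW_TOKENS = 200_000    # Claude's context window

share = PROMPT_TOKENS / WINDOW_TOKENS
print(f"Prompt share of window: {share:.3%}")  # prints 0.075%
```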
The Four Types of Context Rot
Not all context degradation is equal. Recognizing the pattern helps you respond effectively.
1. Poisoning: Outdated Information Persists
You renamed something, changed a decision, or updated terminology. But 40 messages ago, you discussed the old version extensively. That discussion is still in context. Claude might reference the outdated information, creating confusion or errors.
Symptom: Claude uses terminology, patterns, or references that were correct earlier but aren't anymore.
2. Distraction: Irrelevant Content Dilutes Attention
You spent 20 messages on a tangent. Now you're working on something different. That tangent still consumes attention budget, attention that could go to the constraints of your current task.
Symptom: Claude's responses feel less focused, miss details, or include tangential considerations.
3. Confusion: Similar Concepts Conflate
You're working with two similar things, maybe two services, two documents, or two processes. They have similar names or overlapping terminology. Claude starts conflating them, applying the wrong one in the wrong place.
Symptom: Claude mixes up similar-sounding concepts, uses wrong terminology, or applies patterns from one domain to another.
4. Clash: Contradictory Instructions Compete
Early in the session, you said one thing. Later, you said something different. Both instructions are in context. Claude has to reconcile them and might choose wrong.
Symptom: Claude's decisions seem inconsistent, or it asks clarifying questions you thought you'd already answered.
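One way to keep this taxonomy at hand is to treat it as data. The sketch below encodes each rot type with its symptom and matches observations against them; the structure and wording are illustrative, not part of any tool:

```python
# Illustrative mapping of rot types to their telltale symptoms
ROT_TYPES = {
    "poisoning":   "References terminology or decisions that have since changed",
    "distraction": "Responses feel unfocused or drift toward an old tangent",
    "confusion":   "Mixes up similar-sounding concepts or files",
    "clash":       "Inconsistent decisions, or re-asks questions you answered",
}

def diagnose(observed_symptoms: list[str]) -> list[str]:
    """Return rot types whose symptom description matches an observation."""
    return [
        rot for rot, symptom in ROT_TYPES.items()
        if any(obs.lower() in symptom.lower() for obs in observed_symptoms)
    ]

print(diagnose(["mixes up"]))  # ['confusion']
```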
Automatic Context Management
Claude Code handles context automatically through a feature called autocompact. When your context window fills up, Claude Code summarizes the conversation, keeps key decisions, and forgets noise—without you doing anything.
Most of the time, this works well. Lesson 6 teaches when to intervene manually with /compact or /clear for situations automatic management can't handle.
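Claude Code doesn't expose autocompact's internals, but the core idea is easy to sketch: when history gets too big, replace the old middle with a summary and keep the recent turns. The message format, thresholds, and summary placeholder below are illustrative assumptions, not how Claude Code actually does it:

```python
def compact(messages: list[str], max_messages: int = 50, keep_recent: int = 10) -> list[str]:
    """Naive compaction: summarize everything except the most recent turns.

    Illustrative only. Real autocompact also preserves key decisions
    and works on token budgets, not message counts.
    """
    if len(messages) <= max_messages:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = f"[summary of {len(old)} earlier messages]"  # stand-in for an LLM-written summary
    return [summary] + recent

history = [f"message {i}" for i in range(120)]
print(len(compact(history)))  # 11: one summary + 10 recent messages
```

The takeaway: compaction trades detail for space, which is why key decisions have to survive the summary.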
Lab: See Your Context
Objective: See what's consuming your context window right now.
Task 1: Run the Context Command
In Claude Code, run:
/context
You'll see output showing:
- System prompt: Claude's base instructions (fixed cost)
- MCP tools: External integrations (each adds cost)
- Memory files: Your CLAUDE.md + rules (you control this)
- Messages: Conversation history (grows every turn)
- Free space: Remaining budget for actual work
What to observe: Much of your context is consumed before you type anything. That's baseline cost. Context engineering is managing these numbers so you have room for the work that matters.
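To make "baseline cost" concrete, here's a sketch that mirrors the /context categories above with made-up numbers; every figure below is illustrative, not measured from a real session:

```python
WINDOW = 200_000  # Claude's context window, in tokens

# Illustrative budget, roughly mirroring the /context categories above
budget = {
    "System prompt": 3_000,
    "MCP tools":     15_000,
    "Memory files":  5_000,
    "Messages":      40_000,
}

used = sum(budget.values())
for name, tokens in budget.items():
    print(f"{name:14} {tokens:>7,}  ({tokens / WINDOW:.1%})")
print(f"{'Free space':14} {WINDOW - used:>7,}  ({(WINDOW - used) / WINDOW:.1%})")
```

Run against your own /context numbers, and the fixed costs become obvious: everything except Messages is spent before you type a word.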
Task 2: Identify Potential Rot
Think about your current or most recent working session with Claude. Ask yourself:
- Did you change direction or rename anything mid-session? (Potential poisoning)
- Did you go on tangents unrelated to your current task? (Potential distraction)
- Are you working with similar-sounding concepts or files? (Potential confusion)
- Did you give different instructions at different times? (Potential clash)
If you identified any of these, you've diagnosed context rot. Later lessons teach how to treat each type.
Try With AI
Prompt 1: Context Inventory
List everything currently in your context.
Estimate what percentage is:
(1) directly relevant to my next task,
(2) useful background,
(3) noise that dilutes attention.
What you're learning: Before you can engineer context, you need to see what's actually there. This prompt develops awareness of context state.
Prompt 2: Rot Diagnosis
Based on our conversation history, identify any signs of context rot:
- Poisoning (outdated information I've since changed)
- Distraction (tangents no longer relevant)
- Confusion (similar concepts that might be conflating)
- Clash (contradictory instructions I've given)
Be specific about what you find.
What you're learning: Diagnosis comes before treatment. This prompt helps you identify which rot type (if any) is affecting your current session, so you can apply the right fix.
Safety note: When running context diagnostics, you're examining the session state, not changing it. This is observational—safe to run at any time.