PRIMM-AI+: AI as Your Learning Partner
In the previous lesson, you learned the five stages of PRIMM -- Predict, Run, Investigate, Modify, Make -- and saw how each stage builds a specific thinking skill. You traced through a greeting program, understood why comprehension comes before creation, and discovered the research showing that students who read and predict code before writing it develop stronger programming ability. The framework was designed for classrooms with human teachers guiding the process.
You do not have a classroom teacher. You have an AI coding assistant.
That changes the partner, not the method. PRIMM with an AI coding assistant as your learning partner is called PRIMM-AI -- the same five stages, but AI generates examples for you to predict, executes code for you to compare, answers your investigation questions, and reviews your completed work. That adaptation is powerful. But it has a gap: without structural safeguards, AI makes it easy to skip stages and fake understanding. You can ask AI to explain the code before you predict, request a full solution before you modify, or let it write your Make project while you watch. Nothing in basic PRIMM-AI prevents this.
Here is what PRIMM-AI looks like -- the same five stages, now with an AI partner:
| Stage | You | AI |
|---|---|---|
| Predict | Read the code, write your prediction | Generates code samples at the right difficulty |
| Run | Compare prediction to actual output | Executes the program, shows raw output |
| Investigate | Ask targeted questions, trace variables | Answers questions, generates trace tables |
| Modify | Change the code yourself | Compares your version, suggests alternatives |
| Make | Write a spec, then implement | Reviews your spec and completed code |
This is a solid foundation. But nothing in this table prevents you from asking AI to explain the code during Predict, or to write the full solution during Make. The boundaries are implied, not enforced. That is the gap.
PRIMM-AI+ closes that gap. It keeps everything from PRIMM-AI -- every stage, every AI role, every rule -- and adds nine structural enhancements:
| # | Enhancement | What It Adds |
|---|---|---|
| 1 | AI-Free Checkpoints | Moments where AI is explicitly not allowed -- diagnostic, not punitive |
| 2 | Stage-by-Stage AI Permissions | Exact rules for what AI may and may not do at each stage |
| 3 | Mandatory Trace Artifacts | You must produce something visible (trace table, explanation, or failure note) during Investigate |
| 4 | Mastery Gates | You must earn the right to proceed to the next stage |
| 5 | Verification Ladder | Five steps connecting learning predictions to production observability |
| 6 | Error Taxonomy | Five categories of bugs, so you diagnose before you fix |
| 7 | Confidence Scoring | Rate your certainty 1-5 before each prediction -- reveals false confidence |
| 8 | Classroom and Solo Modes | Same framework works for both -- this book uses solo mode |
| 9 | Chapter-End Rubric | Five-dimension self-assessment at the end of every programming chapter |
You will learn enhancements 1-4 in this lesson (the core mechanics of working with AI at each stage) and enhancements 5-9 across the next two lessons — self-assessment tools and professional connections in Lesson 3, and teaching methods with classroom and solo modes in Lesson 4.
PRIMM-AI+ is tool-agnostic. Claude Code, Cursor, GitHub Copilot, Gemini CLI -- the boundaries work the same way regardless of which AI coding assistant you use. The method is the constant. The AI tool is the variable. This book uses Claude Code as the primary partner because it integrates with the Spec-Driven Development workflow you learned in Chapter 5, but every principle transfers.
AI Roles at Each Stage
James opens his AI coding assistant and types: "Explain this Python program to me." The explanation appears instantly — clean, thorough, correct. He reads it, nods, and moves on.
Emma stops him. "What did you just learn?"
James thinks. "I learned... what the program does?"
"No. You learned what the AI says the program does. You skipped Predict entirely. Your brain did zero work." She closes his AI assistant. "Let's talk about when you're allowed to open this."
Each PRIMM-AI+ stage defines what the AI does, what you do, and -- critically -- what the AI must not do. The "must not" rules exist because AI is eager to help. Helpfulness without boundaries destroys the learning that each stage is designed to produce.
Predict -- AI Generates, You Think
What you do: Study a program and predict its output before anything runs. Write down what you think each line does, what the output will be, what happens with edge cases. Record a confidence score (1-5).
What the AI does: Generates programs at the right difficulty level for your current stage. Provides code for you to analyze.
What the AI must NOT do: Explain the code before you have predicted. If the AI tells you what a program does before you think about it, the Predict stage produces nothing -- you are reading an explanation, not building a mental model. You can ask your AI assistant something like: "Generate a short Python program that uses variables and print. Include type hints. Do not explain the code -- just show it to me." The key instruction is "do not explain" -- that preserves your prediction space.
Run -- AI Executes, You Compare
What you do: Run the program and compare the actual output to your prediction. Where were you right? Where were you wrong? What surprised you? Record the comparison.
What the AI does: Executes the program. Runs it again with different inputs you specify. Shows raw output without interpretation.
What the AI must NOT do: Interpret the results for you. The learning happens in the gap between your prediction and the actual output. If AI fills that gap with an explanation, you skip the comparison step that builds understanding.
Investigate -- AI as Questioning Partner
What you do: Write your own explanation of how the program works first. Then ask specific questions about what you observed. Focus on the parts that surprised you during Run. Probe the mechanics you do not yet understand.
What the AI does: Answers your questions directly. Generates trace tables showing variable values at each step. Suggests investigation questions you might not have thought to ask. You direct the conversation: "Trace through this program and show me the value of each variable after every line. Present it as a table."
What the AI must NOT do: Provide unsolicited explanations. If you ask about line 3, the AI answers about line 3 -- it does not explain the entire program. The investigation is yours to direct.
Critical rule: Verify every AI explanation by running code yourself. AI can be wrong. When the AI says "this line does X," test it. Modify the line and see if the behavior matches the explanation. This verification instinct is the single most important habit PRIMM-AI+ builds.
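A minimal sketch of what that verification loop looks like in practice -- the claim and the variable names here are invented for illustration:

```python
# Hypothetical AI claim to verify: "the + operator joins strings
# in left-to-right order."
first: str = "Hello"
second: str = "World"

# Test the claim as stated, then modify the line and compare.
print(first + second)   # expect HelloWorld if the claim holds
print(second + first)   # swapping the operands should swap the output
```

If the output had contradicted the claim, that discrepancy -- not the AI's explanation -- would be the next thing to investigate.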
Modify -- AI as Comparison Partner
What you do: Change the program yourself. Add a feature, fix a limitation, extend the behavior. You write the modification first, then ask for feedback.
What the AI does: After you modify, shows an alternative approach. Compares your version to the original. Explains tradeoffs between approaches. Can provide a minimal hint or point out the specific lines to change if you are stuck -- but not a complete rewrite. You might say: "I rearranged the variables but the output order is wrong. What am I missing?"
What the AI must NOT do: Modify the code for you. The moment AI writes the modification, you are in Make territory without having done the thinking that Modify requires. Your hands produce the change; AI evaluates it afterward.
Make -- AI as Review Partner, Not Ghostwriter
What you do: Build something new from a specification you write. Define what the program should do, then implement it yourself.
What the AI does: Reviews your specification for completeness. Answers specific syntax questions. Reviews your completed code for correctness and style. A typical Make interaction has two parts: first you ask AI to review your spec ("Does this cover all edge cases?"), then after implementing, you ask AI to review your code ("Review for correctness. Do not rewrite -- just point out issues.").
What the AI must NOT do: Write the solution. If AI writes the program and you submit it, you have produced output without learning. The Make stage proves you can apply what you learned in the previous four stages independently.
AI Permissions Table
The table below makes the boundaries concrete. The Right column shows prompts that keep AI as a partner. The Wrong column shows prompts that turn it into a crutch.
| Stage | AI Permission | Right Interaction | Wrong Interaction |
|---|---|---|---|
| Predict | AI may generate the code sample. AI must not reveal the answer or explain the code. | "Generate a short Python program using variables and print. Do not explain the code." | "What will this code print?" |
| Run | AI may execute the program and display output. No restrictions. | "Run this program and show the output." | (No wrong interaction at this stage) |
| Investigate | AI may explain and trace, but only after the learner provides a first explanation. | "What does the + operator do when I use it to join two strings?" (after writing own trace) | "Explain everything about this code." |
| Modify | AI may provide a minimal hint or targeted diff. Not a complete rewrite. | "I am trying to add a second print line but it is not showing. What am I missing?" | "Add a second print line to this program for me." |
| Make | AI may review the specification and completed solution. AI must not write the solution. | "Review my greeting program for correctness. Do not rewrite it." | "Write a program that prints a greeting with a name." |
When you catch yourself about to use a prompt from the Wrong column, pause and rephrase. The Right column prompts produce learning. The Wrong column prompts produce output.
AI-Free Checkpoints
"Close my AI assistant?" James looks alarmed. "But what if I get stuck?"
"Getting stuck is the point," Emma says. "If you can't do it without AI, you haven't learned it yet. The checkpoint shows you where you actually are — not where you think you are."
Throughout Parts 4 and 5, you will occasionally see [AI-FREE] marked in the margin of lessons. When you see this marker, close your AI assistant. Minimize the window, switch to a different tab, put it away. These moments are diagnostic -- they reveal whether you have actually internalized the concept or whether you have been leaning on AI without realizing it.
The rules for AI-free checkpoints are simple:
- Predict is always AI-free. You make your prediction without any AI assistance. This is non-negotiable.
- Make begins AI-free. You write your specification and make your first implementation attempt without AI. Only after that first attempt do you ask AI for review.
- Other stages allow AI after your first attempt. In Investigate, you write your own explanation before asking AI. In Modify, you attempt the change before requesting hints.
There is a large gap between truly understanding something and merely recognizing it when AI explains it. The checkpoints make that gap visible.
Mastery Gates
James finishes reading a program and reaches for the keyboard. "I get it. Let me jump straight to modifying it."
Emma holds up a hand. "Can you explain how the greeting message gets built — not what it prints, but how the pieces connect?"
James hesitates. "It... puts the words together?"
"That's what. How does the + operator join them? Why does the comma appear where it does? What controls the order?" She waits. James cannot answer. "That's why we have gates. You're not ready for Modify yet."
Each stage transition has a formal requirement. You cannot (or rather, should not) move to the next stage until the gate condition is met:
| Transition | Mastery Gate | Why It Exists |
|---|---|---|
| Before Run | Written prediction exists (not just a mental one) | A vague sense of "it probably prints something" is not a prediction. Writing forces commitment. |
| Before Investigate | Comparison of prediction to actual output recorded | Without recording the gap, you lose the learning signal. |
| Before Modify | Can explain how the program works, not just what it does | "It prints a greeting" is what. "It joins two strings with a comma separator using the + operator" is how. |
| Before Make | Written specification exists | Spec-first is not optional. Defining expected behavior before coding is the professional habit PRIMM-AI+ builds. |
These gates feel unnecessary when a lesson is going well. They prove their value when a lesson is not -- when you discover at the Modify gate that you cannot actually explain how the program works, only what it outputs. That discovery saves you from writing confused code in the Make stage.
Mandatory Trace Artifacts
"I think I understand it," James says after reading through a program.
"Show me," Emma replies. "Write it down. A trace table, an explanation in your own words, or even a note saying where you got confused. Something I can look at."
"Why can't I just tell you?"
"Because 'I think I understand' and 'I can prove I understand' are very different things. Your brain is good at feeling confident. Paper is good at exposing the gaps."
Every Investigate stage must produce something visible. A vague sense of "I think I understand it" is not investigation -- it is wishful thinking. PRIMM-AI+ requires you to create at least one of these artifacts before moving to Modify:
- A trace table showing the value of each variable after every line executes
- A plain-English explanation describing how the program works in your own words
- A failure note documenting what you tried to trace and where you got stuck
The third option matters most. If you cannot trace the program or explain it, that is not a sign of failure -- it is a diagnostic signal. A failure note that says "I do not understand why str(score) is needed before joining with +" gives you an exact target for your AI investigation questions. Without the artifact requirement, you would skip past the confusion and carry it silently into Modify.
The trace table from the walkthrough below is an example of a mandatory artifact. The mastery gate for Investigate ("can explain how, not just what") depends on having produced one.
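A trace table can be built by hand, but you can also instrument a program so it prints its own trace. A minimal sketch, using a two-line program invented for illustration:

```python
# Hypothetical two-line program, with one trace print after each step.
greeting: str = "Hi"
print("after line 1: greeting =", repr(greeting))
message: str = greeting + ", Emma"
print("after line 2: message =", repr(message))
```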
A Complete PRIMM-AI+ Lesson Walkthrough
"Enough rules," James says. "Show me what this actually looks like."
Emma nods. "Fair enough. Let's do a full PRIMM-AI+ cycle together — start to finish, one program, all five stages. You'll see every checkpoint, every gate, every rule in action."
Here is what a single PRIMM-AI+ lesson looks like end-to-end, using a concrete Python program. This example uses only variables and print -- the same building blocks you saw in Lesson 1.
```python
name: str = "Sarah"
subject: str = "Python"
score: int = 95
result: str = name + " scored " + str(score) + " in " + subject
print(result)
print(name + " passed!")
```
Stage 1: Predict [AI-FREE]
Before running anything, answer these questions on paper or in a note:
- What will the first `print` statement output? Look at how `result` is built: it joins `name`, the text `" scored "`, the score converted to text with `str(score)`, `" in "`, and `subject`. So: `Sarah scored 95 in Python`.
- What will the second `print` statement output? It joins `name` with `" passed!"`. So: `Sarah passed!`.
- What does `str(score)` do? The score is an `int` (a number). The `+` operator joins text, not numbers. `str(score)` converts the number `95` into the text `"95"` so it can be joined with the other strings.
Confidence score: Rate yourself 1-5. Write it down next to your prediction.
Mastery gate check: Do you have a written prediction with a confidence score? If yes, proceed to Run.
Stage 2: Run
Execute the program (ask your AI assistant to run it, or run it directly when you have Python set up later). Here is the output:
```
Sarah scored 95 in Python
Sarah passed!
```
Compare your predictions. Did you get both lines right? Did you understand why str(score) was needed? If your predictions matched, your mental model is accurate for this pattern. If they diverged, you have specific questions for the next stage.
Mastery gate check: Have you recorded where your prediction matched and where it diverged? If yes, proceed to Investigate.
Stage 3: Investigate
First, write your own explanation of how the program works. Even a rough version counts: "It stores a name, a subject, and a score, then joins them into a sentence and prints it. A second print line prints a shorter message." Only after writing your explanation should you ask AI for deeper investigation.
Now probe the mechanics. Focus on whatever surprised you during Run. Ask your AI assistant targeted questions:
- "Trace through this program and show me the value of each variable after every line." -- The AI returns a trace table. Verify it yourself: after line 4, `result` should hold `"Sarah scored 95 in Python"`.
- "What happens if I remove `str()` and write `name + " scored " + score` instead?" -- Explore the error. Python cannot join a string and an integer with `+`. Understanding why `str()` is needed is the key insight.
- "What if `name` is an empty string?" -- Test the edge case. The output would be `" scored 95 in Python"` -- a sentence with no name but the spaces still appear.
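The second question above is easy to verify yourself. A minimal sketch of what happens with and without `str()`:

```python
name: str = "Sarah"
score: int = 95

# Joining a string and an int with + raises a TypeError.
try:
    name + " scored " + score  # type: ignore[operator]
except TypeError as exc:
    print("TypeError:", exc)

# Converting the number to text first makes the join work.
print(name + " scored " + str(score))  # Sarah scored 95
```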
Each question sharpens your understanding of how the program behaves under different conditions. The AI answers; you verify by checking the logic yourself. This verification instinct — which you will see formalized as Rule 2 below — is the most important habit PRIMM-AI+ builds.
Mastery gate check: Can you explain how the program works, not just what it does? Can you describe why str() is needed and what the + operator does with strings? If yes, proceed to Modify.
Stage 4: Modify
Change the program yourself. Two challenges:
Challenge A: Change the format so the output reads Python: Sarah scored 95 instead -- subject first, then name, then score.
Challenge B: Add a third print line that shows just the score by itself: Score: 95.
Attempt both modifications before asking AI for any help. If you get stuck, ask for a hint -- not a solution: "I rearranged the variables but the output order is wrong. What am I missing?"
After you write your modifications, show both versions to your AI assistant and ask it to compare them. The AI might point out a simpler way to build the string -- a learning opportunity, not a failure.
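For comparison only, after you have attempted both challenges yourself -- one possible pair of solutions (a sketch, not the only correct answer):

```python
name: str = "Sarah"
subject: str = "Python"
score: int = 95

# Challenge A: subject first, then name, then score.
result: str = subject + ": " + name + " scored " + str(score)
print(result)                   # Python: Sarah scored 95
print(name + " passed!")        # Sarah passed!

# Challenge B: a third print line showing just the score.
print("Score: " + str(score))   # Score: 95
```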
Stage 5: Make [AI-FREE start]
Build something new. Write a specification first -- without AI: "Create a program that stores a person's name, city, and age, then prints a profile line like 'Sarah lives in London, age 25' and a second line that says 'Welcome, Sarah!'"
Mastery gate check: Do you have a written specification? If yes, implement it.
Attempt the implementation yourself. You will need three variables, str() to convert the age, and two print statements. When you finish your first attempt, then bring AI back in. Show your spec to your AI assistant for review -- ask whether you have covered all edge cases. Then ask the AI to review your code for correctness without rewriting it.
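Once your first attempt is complete, a version to compare against might look like this -- the variable names and exact wording are one choice among many:

```python
# Spec: store a name, city, and age; print a profile line and a welcome line.
name: str = "Sarah"
city: str = "London"
age: int = 25

profile: str = name + " lives in " + city + ", age " + str(age)
print(profile)                    # Sarah lives in London, age 25
print("Welcome, " + name + "!")   # Welcome, Sarah!
```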
You have now completed a full PRIMM-AI+ cycle: predicted with a confidence score, ran and recorded the comparison, investigated after providing your own explanation, modified independently before requesting hints, and built from a written specification -- with AI as partner at every stage and ghostwriter at none.
The Five PRIMM-AI+ Rules
Emma pulls out a card with five rules printed on it. "Keep this next to your keyboard. Every time you catch yourself breaking one, stop and fix it. These are not suggestions — they are the difference between learning and pretending to learn."
These rules are operational discipline, not suggestions. Each one prevents a specific failure mode in AI-assisted learning.
Rule 1: Never run code you have not predicted. This rule builds your mental compiler. Every time you predict before running, you train your brain to read code and understand it. Skip the prediction and you train yourself to depend on the Run button instead of your own reasoning.
Rule 2: Never trust an explanation you have not tested. AI explanations can be confident and wrong. When AI says "this function returns X," run it and verify. This verification mindset transfers directly to professional practice -- senior engineers test assumptions, junior engineers trust documentation.
Rule 3: Modify before you make. Modification is easier on your brain than creation. When you modify an existing program, you have a working reference, a known structure, and a safety net. When you create from scratch, you have nothing. Modification builds the skills that creation requires.
Rule 4: Write the spec before the code. This is Spec-Driven Development from Chapter 5, applied to learning. Defining what your program should do -- inputs, outputs, edge cases, success criteria -- before writing a single line of code forces you to think about the problem before the solution. AI is dramatically better at generating correct code when the specification is clear.
Rule 5: Use AI as a partner, not a crutch. The test is simple. After an AI interaction, do you understand more than you did before? If yes -- partner. Do you have working code but understand the same amount? If yes -- crutch. Partner interactions grow your capability. Crutch interactions grow your dependency.
The predictable structure is your safety net. You will never be thrown into the deep end. By the time a chapter asks you to write code, you will have already predicted, run, investigated, and modified programs using the same concepts. Every Make exercise has four stages of preparation behind it. The mastery gates ensure you do not skip ahead before you are ready.
This structure mirrors professional code review: read the PR, understand the logic, suggest changes, build your own feature. PRIMM-AI+ formalizes what you already do informally -- and adds explicit AI boundaries, AI-free checkpoints, and mastery gates that prevent the over-reliance pattern experienced developers fall into just as easily as beginners.
Key Takeaways
- PRIMM-AI+ keeps all five stages from PRIMM and adds an AI partner with clear boundaries at each stage — what AI may do, what it must not do, and when it must be closed entirely.
- The AI Permissions Table defines exactly what AI may and may not do at each stage -- use it to distinguish partner interactions from crutch interactions.
- AI-free checkpoints are diagnostic, not punitive — they reveal whether you truly understand or merely recognize AI's explanations.
- Mastery gates prevent you from skipping ahead: written prediction before Run, recorded comparison before Investigate, explanation of how before Modify, written spec before Make.
- The five rules (predict before running, test every explanation, modify before making, spec before code, partner not crutch) are operational discipline that prevents AI dependency.
Try With AI
Prompt 1: Practice the Predict Stage
Ask your AI coding assistant to generate a short Python program (4-6 lines) that uses variables and print to display information about a person or place. Tell it to include type hints and to NOT explain the code.
After the AI generates the program, look away from the response. On paper, write down: What does this program do? What will it print? What happens if one of the variables is empty? Rate your confidence 1-5. Only after you have written your predictions and confidence score should you ask the AI to run it. Record where your prediction matched and where it diverged.
What you are learning: The Predict discipline with confidence scoring -- forcing yourself to build a mental model before seeing the answer and calibrating your certainty. This is the single most important habit in PRIMM-AI+, and the one most easily skipped when AI is one keystroke away.
Prompt 2: Test the Verification Instinct
Ask your AI coding assistant: "Explain what Python's round() function does with negative numbers. For example, what does round(-2.5) return?"

Read the explanation. Then ask it to actually run round(-2.5), round(-1.5), round(0.5), and round(1.5) and show you the real output. Compare the explanation to the actual results.
Did the AI's explanation match the actual output? Python uses "banker's rounding" (round half to even): round(-2.5) returns -2, not -3, because -2 is the nearest even number. This surprises most people (and most AI models). The discrepancy you may find is exactly why Rule 2 exists: never trust an explanation you have not tested.
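You can confirm the behavior yourself in a few lines -- a minimal sketch:

```python
# Python 3 rounds halves to the nearest even integer ("banker's rounding").
for value in [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]:
    print(value, "->", round(value))
```

Every half lands on an even integer: -2, -2, 0, 0, 2, 2.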
What you are learning: The verification instinct that forms the foundation of the Verification Ladder. When you catch an AI explanation that does not match reality, you are practicing the same skill that senior engineers use when they question production logs that "look wrong."
Prompt 3: Classify Partner vs. Crutch
```
I am learning the difference between using AI as a learning
partner and using it as a crutch. Here are three scenarios.
For each one, tell me whether the student is using AI as a
partner or a crutch, and explain why:

1. A student sees a Python program, asks AI "What does this
   print?", reads the answer, and moves on.
2. A student writes their own prediction, runs the code,
   gets a different result, and asks AI "Why does line 3
   produce 'hello' instead of 'Hello'?"
3. A student asks AI "Write me a program that prints a
   greeting with a name" and submits the result.

After explaining each one, ask me to come up with my own
example of a partner interaction and a crutch interaction.
```
What you are learning: How to apply Rule 5 (partner, not crutch) in practice. Classifying real scenarios trains you to notice when your own AI interactions cross the line from learning to dependency — the most common failure mode in AI-assisted education.
Looking Ahead
James looks at the AI Permissions Table and the five rules. "I know how to work with AI now. But how do I know if I'm actually getting better? And does any of this matter once I'm past the exercises?"
"That's the next lesson," Emma says. "You'll learn to measure your own growth — confidence scoring, the verification ladder, even a rubric you'll use at the end of every programming chapter. And you'll see that the habits you're building now are exactly what professionals use every day."
The next lesson introduces the self-assessment tools and professional connections that complete the PRIMM-AI+ picture: how to calibrate your confidence, how the predict-then-verify habit scales from exercises to production, and why the skills you are building now transfer directly to professional software development.
References and Further Reading
- Sentance, S., Waite, J., and Kallia, M. (2019). "Teaching computer programming with PRIMM: a sociocultural perspective." Computer Science Education, 29(2-3), 136-176. DOI: 10.1080/08993408.2019.1608781
- Sentance, S., Waite, J., and Kallia, M. (2019). "Teachers' Experiences of using PRIMM to Teach Programming in School." Proceedings of SIGCSE '19, 476-482. DOI: 10.1145/3287324.3287477
- Sentance, S. and Waite, J. (2017). "PRIMM: Exploring pedagogical approaches for teaching text-based programming in school." Proceedings of WiPSCE '17, 113-114.
- PRIMM Portal: https://primmportal.com
- Computing Education Research: https://computingeducationresearch.org/projects/primm/