From Founder Bottleneck to Owner Delegate: Scaling an AI-Native Company Past Its Owner's Attention
15 Concepts and two lab tracks. The "half-day" in the title is the simulated track: about 2 to 3 hours of conceptual reading plus 2 to 3 hours of simulated lab with mock endpoints. The full-implementation track is a separate one- to two-day effort: about 3 hours of reading plus 6 to 10 hours of hands-on lab. The full track does not modify Paperclip's codebase; it registers your Identic AI as a Paperclip agent, builds the signing-and-verification layer in your own integration skill, and exercises the full cryptographic round-trip against real Paperclip approval routes. Pick the track before Decision 1; see Part 4 for details.
A continuation crash course. This is Course Eight of nine in the agentic-coding track, and it is the deep operationalization of Invariant 2 of the Agent Factory thesis: every human needs a delegate. Courses Five, Six, and Seven taught you to build, govern, and grow an AI-native company's hired Workers. What those three courses left unresolved is what happens when the workforce gets large enough that the human owner can no longer read every approval thread. Course Eight names this as the architecture's last bottleneck and teaches the trust-delegation pattern that removes it: how the owner's delegate (her Identic AI, Don Tapscott's term for a personal AI that carries its user's identity, preferences, and authority) takes routine governance traffic on her behalf, applies her judgment, and surfaces only the decisions that genuinely need her.
The single insight that makes everything else click: an AI-native company stops scaling at the owner's attention, not at the hiring API; the Owner Identic AI is the primitive that removes the owner as the bottleneck without removing the owner from control. Courses Five through Seven let you hire a thousand Workers. Nothing in those courses prevents the owner from drowning in approval threads when she does. Every Concept in Course Eight either expands what the delegate decides autonomously, or sharpens the seam where a decision must come back to the human. Both are the architecture.
The Agent Factory thesis names seven structural rules an AI-native company must obey. Invariant 2 says each human needs a personal AI agent, a "delegate," that holds their context, their judgment, and their authority on their behalf. Here is the architect's framing sentence, the one this entire course is built around:
"A founder can hire ten Workers and read every approval; a founder cannot hire a thousand Workers and read every approval. Without an Identic AI that acts on the owner's behalf, applying the owner's known judgment to routine decisions and surfacing only the ones that aren't routine, every previous invariant in the AI-native company caps out at the owner's attention. The owner's Identic AI is not a productivity tool. It is the only architectural answer to the question of how an AI-native company scales past its founder."
The thesis specifies OpenClaw as the delegate it ships. This course teaches how to configure OpenClaw for one specific use of the delegate: as the company owner's governance delegate, the agent that holds the owner's authority envelope and brokers approval traffic on her behalf. The course extends the thesis without contradicting it: the thesis defines the delegate; the course teaches what the delegate has to do, cryptographically and operationally, to safely absorb a workforce's worth of approval traffic.
Courses Three through Seven operationalized other invariants in depth: Three covered Invariant 4 (engine choice, the runtime each agent runs on), Four covered Invariant 5 (system of record via MCP), Five covered Invariant 7 (the nervous system with Inngest), Six covered Invariant 3 (the management layer with Paperclip), and Seven covered Invariant 6 (hiring as a callable capability). Course Eight completes Invariant 2 with the same depth. Course Nine then adds the cross-cutting discipline, eval-driven development, that turns every Worker, every hire, and every delegated decision into something measurably trustworthy in production.
Courses Five through Seven taught you to use Paperclip as the management layer, Inngest as the operational envelope, and Claude Managed Agents as the worked-example runtime for hired Workers. Course Eight introduces OpenClaw (openclaw.ai, github.com/openclaw/openclaw, docs.openclaw.ai) as the runtime for the owner's delegate, the same OpenClaw the Agent Factory thesis names in Invariant 2.
OpenClaw is an open-source personal AI assistant that runs on the user's local machine (Mac, Windows, or Linux), is reachable through chat apps the user already uses, and is built around persistent memory, user data sovereignty, and skill extensibility. It exposes 50+ integrations, including 15+ chat channels (WhatsApp, Telegram, Discord, Slack, Signal, iMessage, and more). Concept 4 walks through what OpenClaw actually is, why the thesis ships it as the delegate, and what alternatives would look like if you wanted to swap the runtime.
The delegated governance primitive Course Eight teaches, how an owner's Identic AI signs requests to the company's Paperclip workforce, is not a shipped integration between OpenClaw and Paperclip. It is an architectural pattern Course Eight teaches you to assemble using OpenClaw's skills system and Paperclip's verified-identity primitives. Both products ship the building blocks; the course wires them together.
Two caveats before you start:
- OpenClaw is fast-moving. The project is open source under an MIT license, which is a real continuity guarantee, but it is still adding skills, integrations, and primitives weekly, and its institutional position has shifted more than once in its first six months. The course teaches against the stable surface of openclaw.ai and docs.openclaw.ai as of May 2026; specific dates, version numbers, and integration counts should be verified against the official site before being treated as authoritative. Treat anything not on those URLs as in motion.
- The owner's judgment-learning loop, how the Identic AI learns the owner's values from their past approval decisions, is at the frontier of what's shipped today. Course Eight teaches the architectural commitments and the practical patterns that work in May 2026, and explicitly names which parts are open research (Concepts 13 through 15).
The four claims in order:
- An AI-native company stops scaling at the owner's attention, not at the hiring API. Courses Five through Seven let you hire a thousand Workers; nothing in those courses prevents the owner from drowning in approval threads when she does.
- The architectural answer is to give the owner an Identic AI, a personal AI delegate that lives on the owner's hardware, knows her judgment patterns, and acts on her behalf for routine decisions while surfacing the consequential ones.
- OpenClaw is the credible shipped runtime for this in May 2026. It is open source, user-owned, reachable through the owner's existing chat apps, and built around persistent memory of the user. The course teaches how to configure it as a governance delegate, not as a general-purpose assistant.
- The trust-delegation primitive, the architectural pattern that lets the human owner and the owner's Identic AI both act with authority while remaining distinguishable in the audit log, is the central technical move Course Eight teaches. It is what makes delegated governance safe rather than reckless.
In Course Seven, we built a small AI company that could hire a fourth AI Worker (the Legal Specialist) under approval from the human owner. That works when the company has four Workers. When the company has four hundred Workers, the human owner can't read every approval anymore; she would be reading approvals all day. This course teaches the owner how to have her own personal AI assistant (using an open-source tool called OpenClaw that runs on her laptop) that reads the approvals on her behalf, makes the easy decisions itself using patterns it has learned from her past decisions, and only wakes her up when something genuinely needs her judgment. The course also teaches the company's management system how to tell the difference between the owner herself clicking "approve" and the owner's personal AI clicking "approve" on her behalf, because both are authorized, but the audit trail has to distinguish them.

The architectural payoff: the owner's attention is now spent only on decisions that genuinely benefit from a human's judgment.
Courses Five through Seven recap: where things were left
If you've just finished Course Seven, skim and move on. If you're picking this up cold or it's been a while, the four pieces of context below are load-bearing for the rest of Course Eight.
From Course Five (the operational envelope): the company's Workers run inside Inngest's durable-execution wrapper, which gives them crash-safe runs, retry-on-failure, structured logging to two specific tables (activity_log and cost_events), and the step.wait_for_event durable-pause primitive. That Inngest primitive lets a Worker pause for hours or days without consuming compute during the wait. Paperclip's approvals, introduced in Course Six, are a separate mechanism: an approval is a decision record, not a paused Worker. Approving one does not auto-resume anything; continuing the work is an explicit next step. What Course Eight changes is the governance decision: the routine approval that used to wait for the owner's attention is now fielded first by the owner's Identic AI, and only the consequential ones reach the owner herself.
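For readers who want that distinction in code: below is a minimal sketch of the Inngest side only, using the TypeScript SDK's `step.waitForEvent`. The event names, payload fields, and app id are hypothetical, and nothing in Paperclip emits the resume event, which is exactly the point.

```typescript
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "support-co" }); // illustrative app id

export const refundWorker = inngest.createFunction(
  { id: "refund-worker" },
  { event: "refund.requested" }, // hypothetical event name
  async ({ step }) => {
    // Durable pause: the run sleeps, consuming no compute, until an event
    // with a matching approvalId arrives or the 7-day timeout elapses.
    const decided = await step.waitForEvent("await-decision", {
      event: "approval.decided", // hypothetical event name
      match: "data.approvalId",
      timeout: "7d",
    });
    // A Paperclip approval is a decision record, not this event. Recording
    // the approval resumes nothing; some explicit next step has to emit
    // "approval.decided" for this run to continue.
    if (decided?.data.approved) {
      await step.run("issue-refund", async () => {
        // call the payments system here
      });
    }
  }
);
```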
From Course Six (the management plane): the company runs on Paperclip, an open-source management layer (docs.paperclip.ing) that gives the workforce an org chart, authority envelopes (rules bounding what each Worker can do), approval gates (refunds over $500 require human approval), and a shared system of record. The Workers in the worked example are Tier-1 Support, Tier-2 Specialist, and Manager-Agent. Course Eight extends Paperclip once more, adding the delegated-approval recognition path.
From Course Seven (the hiring API): Paperclip exposes hiring as a callable capability. The Manager-Agent can detect a capability gap and propose a new hire, which routes through the same approval gate. Course Seven also introduces the talent ledger (an SQL-queryable audit stream of every hire, eval, and retirement) and the auto-approval policy primitive, which auto-approves routine hires that meet envelope, budget, and eval-pack thresholds. The company hired a fourth Worker, the Legal Specialist, running on Claude Managed Agents via Paperclip's http adapter.
What's left at the human owner's keyboard after Course Seven: every approval that doesn't meet the auto-approval thresholds. Every envelope-extension hire. Every refund over the auto-approval ceiling. Every termination. Every CMA migration. Every standing-policy edit. This is the bottleneck Course Eight removes. Three terms recur throughout the course: Worker (an AI agent the company hired), envelope (the bounds on what a Worker is allowed to do), and Claude Managed Agents / CMA (the hosted-agent runtime Course Seven used for the worked example).
If any of the four feels shaky, the linked Course Five, Six, and Seven docs go all the way back to first principles.
Where this fits: cheat sheet
The 15 Concepts and 7 Decisions of Course Eight, at a glance:
| # | Concept | Part | One-line description |
|---|---|---|---|
| 1 | An AI-native company stops scaling at the owner's attention | Why | The math from Course Seven: about 5 to 10 approvals a day at 10 Workers; hundreds at 1,000. The owner becomes the bottleneck unless something acts on her behalf. |
| 2 | What Identic AI means: Tapscott's framing made concrete | Why | Five properties: personalized, value-reflecting, extension-of-self, self-sovereign, persistent memory. Distinguishes Identic AI from any other "AI assistant." |
| 3 | Why the owner specifically: not the workforce, not the customer | Why | Other Identic AI use cases exist (customer-side, employee-side), but the load-bearing case for the AI-native company is the owner's. The course teaches one well, not three poorly. |
| 4 | OpenClaw: what it actually is | Architecture | Verified from openclaw.ai. Local machine runtime, chat-app reachability, persistent memory, open source, user-owned data. |
| 5 | Persistent memory and the owner's local context | Architecture | Where the owner's accumulated judgment lives on her filesystem. The session primitive from docs.openclaw.ai. What persists, what doesn't. |
| 6 | Chat apps as the interface layer | Architecture | Why OpenClaw reaches the owner through WhatsApp, Telegram, and Discord rather than a web app. The owner is already in chat; the Identic AI lives where the owner is. |
| 7 | The trust-delegation problem | Governance | Two authorized principals (the owner-human and the owner's-Identic-AI). The Manager-Agent has to verify which one is acting and what authority that one carries. |
| 8 | Signed delegation from local credentials | Governance | What OpenClaw can sign with locally. What the Paperclip management layer verifies. Passkeys, hardware-backed keys, and the failure modes when the owner's machine is stolen. |
| 9 | The two-envelope intersection | Governance | The owner's authority envelope ("I can approve anything") meets the Identic AI's delegated envelope ("I can approve routine decisions up to these ceilings"). Their intersection is what executes. |
| Part 4 | The 7 lab Decisions | Lab | Install OpenClaw, onboard with the owner's context, build a Paperclip-integration skill, wire delegated approvals, demonstrate end-to-end, handle the owner-overrides case, test the device-switch and stolen-laptop cases. |
| 10 | What the Identic AI learns over six months | Audit | The owner's accumulated decision patterns. The judgment-learning loop. How OpenClaw's session system stores this. What makes a pattern teachable vs. what stays at the owner's level. |
| 11 | The governance ledger: the Identic AI's audit stream | Audit | A parallel audit log of what the Identic AI decided on the owner's behalf. The owner reads it weekly the same way the board reads Course Seven's talent ledger. |
| 12 | When the Identic AI's judgment and the owner's diverge | Audit | What happens when the Identic AI auto-approves something the owner would have declined. The recalibration loop. Why this is a healthy signal, not a failure. |
| 13 | Self-sovereign memory: where accumulated judgment lives long-term | Open | Three architectural options. What ships in May 2026. What Tapscott calls "reinventing the AI stack." Honest about what isn't solved. |
| 14 | Value alignment beyond pattern-matching | Open | The frontier: how an Identic AI learns the owner's values (the rules under the patterns), not just the patterns themselves. Research preview, not curriculum-ready. |
| 15 | What's next: the Identic AI economy and the eval discipline | Forward | The architectural completion at the end of Course Eight; Course Nine adds the eval discipline that makes the architecture measurably trustworthy. |
Are you ready for this course?
Five-item checklist. If any feels shaky, the linked refreshers below get you there.
- You completed Courses Five through Seven, or have built the equivalent: an Inngest-wrapped Worker, a Paperclip management layer with the approval primitive, and a working hiring API. Course Eight assumes the company-side architecture exists. If it doesn't, build it first.
- You can read TypeScript and shell scripts, even if you can't write them fluently. The lab uses TypeScript for the Paperclip-integration skill and shell for OpenClaw setup. Your AI assistant (Claude Code or OpenCode) types both; you brief, review, and approve. The briefing pattern from Courses Three through Seven continues unchanged.
- You have a Mac, Linux, or Windows machine you can install OpenClaw on. OpenClaw's one-line installer (`curl -fsSL https://openclaw.ai/install.sh | bash`) needs admin rights on first run for Homebrew on macOS. The lab depends on having a real running OpenClaw instance; there is no cloud-only shortcut for Course Eight.
- You have at least one chat app you can use for the lab: WhatsApp, Telegram, Discord, Slack, Signal, or iMessage. The course defaults to Telegram in examples because the OpenClaw docs do, but the choice is yours.
- You're comfortable with the idea of an AI making decisions on your behalf, even if you're skeptical about how far it should go. The course's central technical move is delegated authority; if you can't get past "the AI is approving things in my name," the lab won't land. Concept 12 addresses the failure cases directly; skim it before deciding.
If any of the five feels shaky, start with the linked refreshers before continuing. The course is dense; the prereqs make it feel light.
Course Eight closes the architectural side of the Agent Factory track; Course Nine adds the discipline of eval-driven development that turns the architecture into measurably trustworthy production behavior. If the five prerequisites above sound unfamiliar, work backwards: Course Seven: From Fixed to Dynamic Workforce is the direct prerequisite (the hiring API and the management plane Course Eight extends). Before that: Course Six: From One Worker to a Workforce (Paperclip plus the management plane), Course Five: From Digital FTE to Production Worker (Inngest plus the operational envelope), Course Three: Build AI Agents (the agent loop), and the PRIMM-AI+ chapter if you're new to AI-assisted coding entirely. Course Eight references Course Six and Seven concepts every page or two; coming in cold is harder than completing the on-ramp.
If you can't do the on-ramp right now but want to follow the concepts, you can fake the prerequisites with less than the full stack: read the Paperclip Quickstart and Approvals page; read the OpenClaw Getting Started; skim PRIMM-AI+ Lesson 1 for the prediction-then-run rhythm the course uses in every Concept. With those substitutes, you can follow Parts 1 through 3 and Parts 5 through 6 conceptually. The Part 4 lab requires both a working Paperclip install (from Course Seven) and a working OpenClaw install (from this course); there are no shortcuts.
Glossary: 25 terms a beginner can reference
Course Eight uses vocabulary from across the Agent Factory track plus several new terms specific to the Identic AI architecture. Terms grouped by what they describe.
People and roles
- Owner / Founder / CEO: the human principal who owns the AI-native company. Courses Six and Seven called this "the board" when the decision was governance-level; Course Eight uses "owner" to emphasize that we're talking about an individual human with an Identic AI, not a multi-person board.
- Identic AI: Don Tapscott's term (HBR IdeaCast Ep. 1066, Feb 17, 2026, and his book You to the Power of Two) for a personalized AI that reflects its user's values, persists their context, and acts as an extension of them. Five properties: personalized, value-reflecting, extension-of-self, self-sovereign, persistent memory.
- Maya: Course Eight's worked-example owner. Founder and CEO of the customer-support company built across Courses Five through Seven. The course follows her through the lab.
OpenClaw primitives
- OpenClaw: open-source personal AI assistant (openclaw.ai). Runs on the user's local machine; reachable through chat apps; persistent memory; full system access; skills and plugins. Course Eight's worked-example runtime for Maya's Identic AI.
- Skill: OpenClaw's extensibility primitive. A skill is a unit of capability (read Gmail, control Spotify, talk to Paperclip). Skills can be written by the user, contributed by the community (ClawHub), or written by OpenClaw itself. Note: same word as Paperclip's skill primitive but a distinct concept; Paperclip skills install onto Workers, OpenClaw skills install onto the user's local OpenClaw.
- Session: OpenClaw's persistent-memory primitive (docs.openclaw.ai/concepts/session). The unit of accumulated context for a user across time, across devices, across chat apps. What makes OpenClaw an Identic AI rather than a stateless assistant.
- Onboard: OpenClaw's setup process. `openclaw onboard` walks the user through persona setup, model selection, chat-app integration, and initial skill installation.
- Companion App: OpenClaw's macOS menubar app (beta as of May 2026). An alternative to chat-app interaction; useful when the owner is at her desk rather than on her phone.
Course Eight's architectural concepts
- Owner Identic AI: the specific configuration of an Identic AI that acts as a governance delegate for an AI-native company owner. The course's central technical artifact. Maya's OpenClaw, configured per Course Eight's pattern, is her Owner Identic AI.
- Delegated governance: the load-bearing primitive of Course Eight. The pattern by which Maya's Owner Identic AI receives approval requests from the Paperclip management layer, applies Maya's known judgment to the routine ones, and surfaces only the consequential ones to Maya herself.
- Trust delegation: the verification mechanism that lets the Paperclip management layer distinguish between Maya-the-human and Maya's-Identic-AI when an approval click arrives. Both are authorized; the audit trail records which one acted.
- Owner-authority envelope: Maya's personal authority envelope (analogous to Course Six's authority envelope concept). What Maya herself can approve. Distinct from the Identic AI's delegated envelope.
- Identic AI's delegated envelope: the subset of Maya's authority that her Identic AI is allowed to exercise on her behalf. Set by Maya, recorded in the governance ledger, narrower than Maya's full authority by design.
- Governance ledger: Course Eight's parallel to Course Seven's talent ledger. The append-only audit stream of every decision Maya's Identic AI made on her behalf. Maya reads it weekly.
- Judgment-learning loop: how Maya's Identic AI accumulates a model of her decisions over time. Concept 10 covers the mechanics; Concept 14 covers the frontier of value alignment (the open research problem of going from patterns to values).
- Recalibration: the loop by which Maya corrects her Identic AI when its judgment diverges from hers. Concept 12 covers this. Recalibration events are themselves recorded in the governance ledger.
From Courses Three through Seven (referenced, defined more fully there)
- Worker: a single AI agent doing work for the company. Course Six defined this; Course Eight uses it throughout.
- Manager-Agent: the Worker that orchestrates other Workers. Course Six and Seven's protagonist; Course Eight's interlocutor for Maya's Identic AI.
- Paperclip: the management layer Course Six introduced and Course Seven extended. Course Eight extends it once more, adding the delegated-approval recognition path.
- Authority envelope: Course Six's mechanism for bounding what a Worker can do. The Course Eight analog is the owner-authority envelope plus delegated envelope split.
- Activity log: the append-only audit stream of every workforce action. Distinct from Course Eight's governance ledger (which is the Identic AI's audit stream).
- Approval: Course Six and Seven's governance gate. In Paperclip, an approval is a decision record, not a paused process: a board member or a registered agent records a decision on it. (`step.wait_for_event` is Inngest's durable-pause primitive from Course Five, a separate mechanism; Paperclip approvals are not wired to it.) Course Eight changes who fields the routine ones: Maya's Identic AI before Maya herself.
Thesis-level
- AI-native company: Course Six and Seven's thesis-level term for a company whose work is done primarily by AI Workers under human governance. Course Eight argues that the human governance part of this definition is incoherent at scale without an Owner Identic AI.
- Invariant 2 / the delegate: Invariant 2 of the Agent Factory thesis: every human needs a delegate. A personal agent that holds the user's identity, context, and authority envelope, and brokers all downstream work on the user's behalf. The thesis names OpenClaw as the delegate; Course Eight teaches how to configure that delegate as a governance delegate for an AI-native company owner.
- Self-sovereign: Tapscott's commitment that an Identic AI should be owned by the user, not a platform. Course Eight inherits this commitment; OpenClaw is the shipped product that operationalizes it.
Part 1: Why the owner is the bottleneck
The thesis of Courses Five through Seven was that an AI-native company can scale its workforce indefinitely. The hiring API is callable. The approval primitive is reusable. The talent ledger is queryable. Nothing in the architecture stops the company from hiring its tenth Worker, its hundredth, its thousandth.
But something does cap the architecture, and Course Seven left it implicit. Every consequential decision still routes to the human owner's attention. A company that can hire 1,000 Workers but expects one human to read 1,000 Workers' worth of approval threads is not scaling. It is moving the bottleneck from "recruiting" to "owner attention." Part 1 makes this argument concrete, then introduces the architectural response: the owner's Identic AI. Three Concepts.
Concept 1: An AI-native company stops scaling at the owner's attention, not at the hiring API
This is the heaviest concept in the course because everything else rests on it. If the argument here doesn't land, the rest of Course Eight reads like forward-looking speculation about personal AI. If the argument does land, the rest of Course Eight reads like solving a concrete problem the previous three courses created.
The argument is structural, but the easiest way to see it is to put numbers on it. We'll use Course Seven's actual worked example: the customer-support company built across Courses Five through Seven, which by the end of Course Seven has four Workers (Tier-1 Support, Tier-2 Specialist, Manager-Agent, and the Legal Specialist just hired in Course Seven's Decision 4). Maya, the company's founder and CEO, is the human owner. By the end of Course Seven, every consequential approval still pings Maya's phone.
The math at four Workers. Let's count what hits Maya's phone in a typical week with the Course Seven workforce, using realistic rates from the operational examples in Courses Six and Seven.
| Approval source | Per Worker per week | At 4 Workers |
|---|---|---|
| Refund > envelope ceiling (Course Six's refund_max=$500) | ~2 | ~8 |
| Envelope-extension hire (Course Seven's Concept 8: auto-approval can't bypass) | ~0.1 | ~0.4 |
| Termination decision | ~0.05 | ~0.2 |
| Budget override (Worker hit its monthly ceiling) | ~0.25 | ~1 |
| CMA migration or substrate change | ~0.05 | ~0.2 |
| Standing-policy edit (new auto-approval rule) | ~0.5 | ~2 |
| Total per week | | ~12 |
Twelve approval events per week. Roughly two a day. Maya can handle this on her phone between meetings. The owner-attention cost is real but manageable. This is the regime Course Seven's architecture was designed for.
The math at forty Workers. Now suppose Maya's company has grown over six months. The Manager-Agent has detected eight more capability gaps and hired Workers for each: a Billing Specialist, a Refund Analyst, an Onboarding Worker, a Churn-Risk Worker, three more Tier-2 Specialists for different product lines, and a Senior Legal Reviewer. The auto-approval policy from Concept 9 of Course Seven covers most Tier-1 burst hires. So far so good. But each Worker generates its own stream of consequential approvals.
| At 40 Workers | Per week |
|---|---|
| Refunds > envelope ceiling | ~80 |
| Envelope-extension hires | ~4 |
| Termination decisions | ~2 |
| Budget overrides | ~10 |
| CMA migrations / substrate changes | ~2 |
| Standing-policy edits | ~20 |
| Total per week | ~118 |
About 17 a day. Maya is now spending three to four hours daily reading approval threads. This is the regime where Maya starts asking whether she should hire a human chief-of-staff to triage the queue. That's a tell. The architecture is starting to fail the original thesis: Maya is being asked to add humans to scale the workforce.
The math at four hundred Workers. Suppose Maya keeps growing. By month 18, the company runs 400 Workers, still small for an AI-native company, but well past the size where the Course 5-7 architecture was stress-tested.
| At 400 Workers | Per week |
|---|---|
| Refunds > envelope ceiling | ~800 |
| Envelope-extension hires | ~40 |
| Termination decisions | ~20 |
| Budget overrides | ~100 |
| CMA migrations / substrate changes | ~20 |
| Standing-policy edits | ~200 |
| Total per week | ~1,180 |
About 170 approval events a day. Maya cannot read 170 approval threads a day, no matter how fast she scrolls. The architecture has stopped working. The hiring loop is now generating more approval traffic than the owner can process. Maya has hit the owner-attention bottleneck.

The scaling math gives the bottleneck a number: an AI-native company hits its owner-attention ceiling somewhere between 10 and 40 Workers, well before any other scaling constraint binds.
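If you want to reproduce the ceiling yourself, the arithmetic fits in a few lines. A minimal TypeScript sketch, using the per-Worker weekly rates from the tables above (the rates are the course's illustrative estimates, not measurements):

```typescript
// Per-Worker approvals per week, from the Concept 1 tables.
const weeklyRatesPerWorker = {
  refundsOverCeiling: 2,
  envelopeExtensionHires: 0.1,
  terminations: 0.05,
  budgetOverrides: 0.25,
  substrateChanges: 0.05,
  standingPolicyEdits: 0.5,
};

const perWorkerPerWeek = Object.values(weeklyRatesPerWorker).reduce(
  (a, b) => a + b,
  0
); // ~2.95 approvals per Worker per week

// Daily approval load as the workforce grows.
for (const workers of [4, 40, 400]) {
  const perDay = (workers * perWorkerPerWeek) / 7;
  console.log(`${workers} Workers -> ~${Math.round(perDay)} approvals/day`);
}
// 4 Workers -> ~2, 40 -> ~17, 400 -> ~169. If the owner's readable ceiling
// is a few dozen threads a day, the bottleneck binds between 10 and 40 Workers.
```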
What are the architectural responses available to her? There are exactly three, and the course needs you to internalize that two of them are wrong before the third one (the Identic AI) reads as the answer rather than as a novelty.
Wrong response A: auto-approve more aggressively. Maya could expand the auto-approval policy from Concept 9 of Course Seven to cover more categories. Currently the policy auto-approves Tier-1 burst hires under a $250/month envelope; she could raise the ceiling to $1,000/month, or include refund decisions up to $2,000. This would reduce the queue. It would also abandon the safety property that the entire seven-invariant thesis is built on. The reason Course Seven kept the envelope-extension check outside the auto-approval surface (Concept 8) is that any authority a Worker has that no Worker had before is a decision the human has to consciously make. Auto-approving more aggressively reverses that commitment. It says: we can't be bothered to govern at scale, so we'll declare that scale doesn't need governance. That's the AI-native equivalent of an unreviewed pull-request culture. It works until it doesn't.
Wrong response B: add humans to the approval pool. Maya could hire a human chief-of-staff to triage approvals, or appoint two co-founders as additional approvers. This would reduce the per-person queue. It would also reintroduce the org-chart hierarchy that AI-native companies were supposed to flatten. Course Six's whole architectural argument was that the Manager-Agent absorbs the middle-management coordination layer; if Maya now adds a human management layer above the Manager-Agent to handle approvals, she has rebuilt the company shape Courses 5-7 spent three courses removing. Worse: each human she adds has the same scaling ceiling she does. Two co-founders process about 30 approvals a day instead of 17. Three handle about 50. The architecture still caps out at about 10 times the workforce per human; it just caps out at a slightly higher number. You cannot scale an AI-native company by adding humans to its governance loop. If you could, it wouldn't be AI-native.
Right response: the owner's Identic AI. A personal AI delegate, running on Maya's hardware, that has learned Maya's approval patterns from her past 200 decisions. When a new approval request arrives, the Identic AI either resolves it autonomously (when Maya's own pattern is clear and the request is routine) or surfaces it to Maya (when the request is novel, consequential, or outside the patterns the Identic AI has confidently learned). Maya's attention is now spent only on the decisions that genuinely benefit from a human's judgment, and the workforce can grow without re-creating the bottleneck.
Notice what this response does not do. It does not auto-approve under a policy Maya wrote once and forgot about. It applies Maya's judgment, which the Identic AI has accumulated from watching Maya decide. It does not delegate the authority: Maya remains the principal; the Identic AI acts on her behalf, with her explicit consent, and every action is recorded in an audit stream Maya reviews. The owner remains in the loop; the owner's attention does not remain in the loop on routine traffic.
This is the architectural primitive Invariant 2 of the thesis names: the delegate. The thesis says it abstractly ("every human needs a delegate that holds their context, represents their judgment, carries their authority envelope, and brokers all downstream work on their behalf"). Course Eight operationalizes it concretely for the owner of an AI-native company. Without it, the previously operationalized invariants (engines, system of record, nervous system, management layer, hiring API) cap out at a workforce of a few dozen. With it, the architecture scales to the size of the workforce, not the size of the owner's calendar.
You're Maya. Your company has 80 Workers at month nine. The auto-approval policy from Concept 9 of Course Seven covers Tier-1 burst hires. The Manager-Agent's gap-detection signals fire on a new pattern: an unexpected volume of Spanish-language customer questions over three weeks. The Manager-Agent drafts a hire proposal for a Spanish-Language Tier-2 Specialist. The proposed authority envelope is identical to the existing English-language Tier-2 (no envelope-extension check needed). The proposed budget is $800/month, well within Course Seven's auto-approval ceiling. The eval pack passes.
Two predictions, written separately on paper before you read on:
- Under Course Seven's auto-approval policy (Concept 9 of Course Seven, which only checks envelope, budget, and eval-pack thresholds): does this hire get auto-approved without involving Maya? Predict yes or no.
- Under what Course Eight will teach (the Owner Identic AI applying Maya's accumulated judgment): does Maya's Identic AI auto-approve this, or surface it to Maya? Predict surface or auto-approve.
If your two predictions are the same, you haven't yet seen the distinction Course Eight is built on. If they're different, you've seen it. Either is fine; read on.
Answers:
- Yes, Course Seven's auto-approval policy fires. The hire fits the policy's stated criteria: known envelope, eval pack passes, within budget ceiling. The policy has no way to know this hire is different from a routine burst-capacity hire.
- The Owner Identic AI surfaces it to Maya, not because Course Seven's policy is wrong, but because the Identic AI encodes a class of judgment Course Seven's policy cannot. Spanish-language support is a strategic expansion of Maya's market, not just additional capacity. Maya might want to think about whether the company is ready to commit to bilingual support contractually, whether the privacy policy and terms of service need Spanish translations, whether this hire signals a broader product direction.
The distinction is rule vs. judgment. A rule can be encoded once and applied uniformly. A judgment is learned from the owner's past decisions and applied per-situation. Course Seven's policy is a rule; the Owner Identic AI is a judgment-applier. Both are valuable: the rule handles 90% of routine cases at zero owner-attention cost, the judgment handles the remaining 10% that genuinely benefit from a human-trained pattern. This is what the course will teach you to build.
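The split is easy to state in code. Below is a minimal sketch; the proposal shape, the $1,000 ceiling, and the novelty check are hypothetical illustrations, not Course Seven's actual policy schema or Course Eight's shipped triage logic.

```typescript
// Hypothetical proposal shape, for illustration only.
interface HireProposal {
  envelopeId: string;       // e.g. an existing "tier2-support" envelope
  monthlyBudgetUsd: number;
  evalPackPassed: boolean;
  capabilityTags: string[]; // e.g. ["support", "language:es"]
}

// Course Seven's rule: a fixed predicate, applied uniformly, zero judgment.
function autoApprovalRuleFires(
  p: HireProposal,
  knownEnvelopes: Set<string>
): boolean {
  return (
    knownEnvelopes.has(p.envelopeId) && // no envelope extension
    p.monthlyBudgetUsd <= 1000 &&       // within the budget ceiling
    p.evalPackPassed                    // eval pack passes
  );
}

// Course Eight's judgment layer: patterns learned from the owner's past
// decisions. Here the learned model is faked as a predicate; Concept 10
// covers how it actually accumulates from the governance ledger.
function identicAiTriage(
  p: HireProposal,
  ownerHasPatternFor: (tag: string) => boolean
): "auto-approve" | "surface-to-owner" {
  const novel = p.capabilityTags.some((t) => !ownerHasPatternFor(t));
  return novel ? "surface-to-owner" : "auto-approve";
}
```

On the Spanish-language hire, `autoApprovalRuleFires` returns true (the rule has no concept of novelty), while `identicAiTriage` sees an unfamiliar `language:es` tag and surfaces the decision to Maya.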
Bottom line: Concept 1's math gives the workforce-vs-attention bottleneck a number. The AI-native company hits its owner-attention ceiling somewhere between 10 and 40 Workers, well before any other scaling constraint. The delegate primitive Invariant 2 names is what removes the cap, not by reducing the number of decisions, but by routing routine ones through the owner's known judgment instead of the owner's actual attention. Two architectural responses are wrong (auto-approve more aggressively; add humans to the loop) and only one is right (the delegate). The math is the case for why.
Concept 2: What Identic AI means: Tapscott's framing made concrete
The architectural primitive that solves the bottleneck is Identic AI, as defined by Don Tapscott in his HBR IdeaCast interview (Episode 1066, February 17, 2026) and his book You to the Power of Two: Redefining Human Potential in the Age of Identic AI. Course Eight inherits Tapscott's vocabulary because it's the cleanest available framing for what we're building, and because adopting it ties the course to a current management-discourse conversation rather than coining yet another term.
Tapscott's definition, verbatim from the transcript:
"the rise of intelligent companions that really learn who we are, and they reflect our values and ultimately operate as extensions of ourselves... a subset of agentic AI, we call them identic AI"
Five properties he names, each of which Course Eight will return to:
- Personalized: a single individual's agent, not a workforce agent. Maya's Identic AI is Maya's. There is no shared instance, no team account, no organizational tier. The unit is one human, one Identic AI.
- Reflects user's values and judgment: trained on the user's documents, decisions, communications, accumulated context. The Identic AI is not a generic AI assistant configured with the user's name; it is an AI that has learned the user over time.
- Extension of self: meant to feel like extended cognition, not a tool the user invokes. In Tapscott's framing: "it's becoming a part of the human experience." In Maya's case: she does not "log in to her Identic AI" the way she logs in to Paperclip. The Identic AI is reachable in her chat apps, on her phone, in her menu bar; it acts when she acts; it knows what she's working on.
- Self-sovereign: user-owned, not platform-owned. Tapscott's emphasis throughout the transcript: "identic AI needs to be self-sovereign. We need to own our own superintelligence." The accumulated context, the learned patterns, the judgment model: these belong to the user. They are not held hostage by a platform. They survive the user changing devices, changing employers, changing chat-app preferences.
- Persistent memory: accumulates knowledge of the user continuously. Tapscott's example from the transcript: "For digital Don, for example, I've input about 500 documents, everything I could find that I've written my speeches, my PowerPoints, my books and articles and interviews and all kinds of stuff like that. And it's learning about me and how I view things and how I think about things."

All five properties must be present together; self-sovereign is the one most easily abandoned and the one that determines the runtime choice in Concept 4.
These five properties are the discriminating definition. They distinguish Identic AI from:
- General-purpose AI assistants (which lack persistent memory and don't reflect the user's specific judgment)
- Workforce agents (which are owned by a company, not by an individual; Course 5-7's Workers are workforce agents)
- Customer-side AI agents (which serve a user but are typically platform-owned and platform-hosted; they fail the self-sovereign property)
- Personal assistants without learning (a chat-app bot that schedules meetings is helpful but doesn't accumulate the user's judgment over time)
The discriminator most readers under-weight is self-sovereignty. Tapscott returns to it repeatedly because, in his view, the AI industry's default trajectory is toward platform-owned Identic AI and against user-owned Identic AI. From the transcript: "The biggest question for me is who's going to own digital Adi? Mark Zuckerberg? Google? This is an extension of you and your intelligence. And if they own it, that's a big problem." If Maya's Identic AI is hosted by a platform that can read her judgment patterns, modify them, downrank them, or revoke her access to them, then her Identic AI is not actually acting on her behalf. It is acting on the platform's behalf with Maya's data. Course Eight commits to the self-sovereign property as a non-negotiable architectural property, and that commitment is what determines the runtime choice in Concept 4 (OpenClaw, because it runs on Maya's hardware and stores her context on her filesystem, not as an opt-in, as the default architecture).
How Identic AI maps onto Course Eight's specific use case. Course Eight is not teaching general-purpose Identic AI. It is teaching one specific configuration: the Owner Identic AI, an Identic AI configured to act as a governance delegate for the owner of an AI-native company. Other valid Identic AI configurations exist (a personal Identic AI that runs your household, manages your calendar, drafts your email; Tapscott's "digital Don" example fits this) and Course Eight will return to them briefly in Concept 15 as the broader Identic AI economy the architecture enables. But the course's load-bearing example is Maya's Identic AI in its capacity as her governance delegate. That capacity is what operationalizes Invariant 2 for the AI-native company, and that's the capacity Course Eight teaches.
The other four properties (personalized, value-reflecting, extension-of-self, persistent memory) are operational requirements; self-sovereign is an architectural commitment. Without it, the other four still work, but they work against the owner rather than for her. Course Eight teaches all five, and treats self-sovereignty as the load-bearing one.
Bottom line: Identic AI, in Don Tapscott's framing, is a personalized AI that reflects its user's values, persists their context across time, acts as an extension of them, and remains owned by the user, not by a platform. Five properties; self-sovereign is the one most easily abandoned and the one the course refuses to compromise on. Owner Identic AI is the specific configuration Course Eight teaches: an Identic AI configured to act as the AI-native company owner's governance delegate.
Concept 3: Why the owner specifically: not the workforce, not the customer
A reasonable reader at this point can ask: if Identic AI is so important, why is the course only teaching the owner's? Why not teach customer-side Identic AI, or employee-side Identic AI, or peer-to-peer Identic AI interactions, or the Identic AI economy more broadly? The answer is that Course Eight makes a deliberate scope choice, and the reasoning matters.
There are at least four categories of Identic AI use case relevant to an AI-native company:
| Use case | Whose Identic AI | What it does | Maturity in May 2026 |
|---|---|---|---|
| Owner / governance delegate | The owner of an AI-native company | Pre-filters approval traffic, applies owner's judgment, surfaces consequential decisions | Shipped (OpenClaw plus the pattern Course Eight teaches) |
| Customer-side personal AI | An individual customer | Interacts with companies on the customer's behalf, holds the customer's context, makes purchases | Partially shipped (OpenClaw exists; companies don't yet accept signed customer-Identic-AI requests as a normal interaction channel) |
| Employee-side delegate | An employee of an AI-native company | Drafts emails, prepares for meetings, manages the employee's own work delegated by the workforce | Partially shipped (OpenClaw can do this; integration with company workflows varies) |
| Peer-to-peer / Identic AI economy | Two individuals' Identic AIs interacting directly | Negotiate deals, schedule, coordinate without humans in the loop | Speculative; Tapscott's "infinite number of vice presidents" end-state |
Course Eight teaches the first one (the owner's) and does not teach the others. Three reasons:
First, the load-bearing argument from Concept 1 is about the owner specifically. The scaling-impossibility math doesn't apply to customers or employees in the same way. A customer with no Identic AI is mildly inconvenienced; the company workforce can still serve them through traditional channels. An employee without an Identic AI works the same way employees always have; productivity is lower than it could be but the company functions. Only the owner's bottleneck stops the company from scaling. The course's claim, that the Owner Identic AI is what operationalizes Invariant 2 for an AI-native company, is true specifically because the owner's case is the one that makes the architecture incoherent without it. The other cases are valuable but not load-bearing.
Second, the customer-side Identic AI use case isn't fully shipped yet, and the architecture has open gaps Course Eight cannot honestly close. Tapscott's transcript gestures at the customer-side use case ("a doctor that's been to every medical school in the world, your tutor that's literally a know-it-all") but the trust-delegation between a stranger's Identic AI and a company's workforce is a harder open problem than the owner's case. With Maya, both the owner-authority envelope and the Identic AI's delegated envelope are configured by Maya herself; the trust model is one-party. With an arbitrary customer, the company has to verify identity, authority, and authorization across an untrusted boundary; the trust model is multi-party with no shared root. Courses 5-7 didn't teach that level of cross-party trust either; teaching it in Course Eight would require introducing primitives the rest of the track doesn't depend on. Honest pedagogy: teach the load-bearing case well; gesture at the broader case as the open frontier.
Third, a course that tries to teach all four cases teaches none of them well. The track's pattern across Courses 3-7 is to teach one thing deeply, with a complete worked example, and to gesture at adjacent cases in sidebars and the forward-look section. Course Six taught the workforce, not the broader AI-native organization. Course Seven taught the hiring API, not the full lifecycle including offboarding to outside counsel. Course Eight teaches the Owner Identic AI, not the broader Identic AI economy. Concept 15 returns to the broader picture as the closing forward-look, naming the customer-side and employee-side and peer-to-peer cases as the next architectural frontiers. But the lab, the worked example, the 15 Concepts of teaching: all about Maya's case.
A useful test for whether Course Eight made the right scope choice: if a reader finishes Course Eight and successfully sets up Maya's Owner Identic AI in their own AI-native company, the architecture works at scale. The owner's attention is no longer the bottleneck. The seven invariants from Courses 3-7 are now complete. If the same reader wants to extend Owner Identic AI patterns to other use cases (a customer-side Identic AI in their product, an employee-side delegate for their team), they have the architectural tools to do so, and Course Eight names where each extension is shipped vs. open. The course delivers the load-bearing case completely and names the rest honestly. That's the pedagogical commitment.
Course Eight teaches the Owner Identic AI and not the other three use cases. Each option below is a real argument someone could make for that scope choice. Predict which one is the course's actual reasoning, before you read on. The point of the prediction is that more than one of these is defensible; you have to pick the one that matches the course's central argument specifically.
(a) The owner's case is the only one mature enough to ship: customer-side and peer-to-peer Identic AI need cross-party trust primitives that don't exist in May 2026, so the course teaches what's buildable today. (b) The owner's case is the load-bearing one: Concept 1's scaling math shows that without the Owner Identic AI, Courses 5-7's architecture itself is incoherent past a few dozen Workers. The other cases are valuable but the company still functions without them. (c) The owner's case is the pedagogically simplest: it's a one-party trust model (Maya configures both envelopes herself), so it's the cleanest place to teach the trust-delegation primitive before the harder cases. (d) The owner's case has the clearest commercial demand: AI-native company owners are the buyers who will pay for a governance delegate first, so the course teaches to the market.
Answer: (b). All four contain something true, which is why this is a real prediction and not a reading check. (a) is true (Concept 3 says exactly this about customer-side maturity) but it's a consequence of the scope choice, not the reason for it. (c) is true (Concept 3 names the one-party trust model as a teaching advantage) but again it's a benefit, not the load-bearing reason. (d) may well be true but the course makes no commercial-demand argument. The course's actual reasoning is (b): the owner's bottleneck is the one whose absence breaks the architecture. Concept 1's math is the case. The customer-side, employee-side, and peer-to-peer cases are valid Identic AI configurations, but the company functions without them; only the owner's case makes Courses 5-7 incoherent at scale. Course Eight teaches the load-bearing case completely; Concept 15 names the others as the open frontier.
Bottom line: Course Eight teaches one specific Identic AI configuration, the Owner Identic AI, because that's the use case whose absence makes Courses 5-7's architecture incoherent at scale. Other Identic AI cases (customer-side, employee-side, peer-to-peer) are valuable but not load-bearing for the AI-native company's scaling argument. The course delivers the load-bearing case completely and names the rest honestly in Concept 15.
Part 2: The OpenClaw runtime
Part 1 named the problem and committed to a scope. Part 2 introduces the runtime, OpenClaw, and walks through what it actually is, where it lives, and how Maya talks to it. Three Concepts: what OpenClaw is, verified from the official source (Concept 4); where Maya's accumulated context lives on her filesystem (Concept 5); and why OpenClaw chose chat apps over a web app as the interface layer (Concept 6).
Concept 4: OpenClaw: what it actually is
OpenClaw is the runtime Course Eight teaches against. Before any of the architectural patterns in Concepts 7 through 15 make sense, you need a clean mental model of what OpenClaw actually is, verified from openclaw.ai and docs.openclaw.ai, not from pattern-matching against earlier personal-AI products.
The verified facts, drawn directly from the official site and docs:
- Runs on the user's local machine. Mac, Windows, or Linux. The installer is `curl -fsSL https://openclaw.ai/install.sh | bash` or `npm i -g openclaw`. After install, `openclaw onboard` walks the setup.
- Open source. github.com/openclaw/openclaw. MIT-licensed. The user can read, fork, modify, and self-host.
- Reachable through chat apps the user already uses. OpenClaw lists 50+ integrations, including 15+ chat channels (WhatsApp, Telegram, Discord, Slack, Signal, iMessage, and more) at openclaw.ai/integrations. The user does not install a new app; they message OpenClaw in their existing chat app.
- Persistent memory. The site's headline phrase: "Remembers you and becomes uniquely yours. Your preferences, your context, your AI." The `session` primitive (docs.openclaw.ai/concepts/session) is the unit of accumulated context.
- Full system access on the user's machine. Read and write files, run shell commands, execute scripts (docs.openclaw.ai/bash). Full access or sandboxed: the user's choice.
- Browser control. Can browse the web, fill forms, extract data from any site (docs.openclaw.ai/browser).
- Skills and plugins. Extensible with community skills (clawhub.ai) or user-built ones (docs.openclaw.ai/skills). OpenClaw can even write its own.
- Model-agnostic. Anthropic, OpenAI, or local models. The user picks during onboarding.
- Companion app. A macOS menubar app (beta in May 2026) for desktop access alongside the chat-app interface.
Institutional signals worth noting. OpenClaw's most load-bearing signal is structural rather than reputational: the project is open source under an MIT license. That license is a continuity guarantee, because the codebase at github.com/openclaw/openclaw can be forked and self-hosted by anyone, regardless of what happens to the project's stewards. Beyond the license, the project has a very large GitHub following (hundreds of thousands of stars), and it has major-vendor sponsorship; as of May 2026 the sponsor list includes OpenAI, GitHub, NVIDIA, Vercel, and Convex, though a sponsor list is the kind of fact that shifts, so treat it as an as-of-May-2026 example rather than a load-bearing claim. The founder, Peter Steinberger, joined OpenAI in early 2026 while the project continues as an open-source effort. Specific counts and sponsor details should be verified at openclaw.ai and the project's GitHub before being quoted authoritatively.
Why these signals matter for Course Eight. Course Eight's central architectural commitment is self-sovereignty: Maya's accumulated judgment is hers, not a platform's. The risk with any open-source personal-AI project is that it shuts down, gets acquired into a closed-source product, or pivots its data-ownership model. The signal that pushes hardest against that risk is the MIT license itself. Institutional backing (the major sponsor list, the founder's hire by OpenAI) indicates the project has resources and an industry stake, but sponsorship and hiring can both change. Open source is the durability guarantee that survives a change of stewards: even if the project's direction shifts, the codebase can be forked and self-hosted, and Maya's runtime can continue. None of these signals is a proof of long-term durability. Together, with the MIT license as the anchor, they make OpenClaw the most credible self-sovereign Identic AI runtime to build a curriculum against in May 2026.
The mental model: how OpenClaw differs from a chatbot. A reader coming to OpenClaw from ChatGPT or Claude.ai has a mental model of "an AI in a chat window I open when I want to ask something." That model is wrong for an Identic AI in four specific ways:
| Mental model | Chatbot | Owner Identic AI (OpenClaw) |
|---|---|---|
| When does the AI act? | Only when the user explicitly opens the app and types | Continuously, in the background; proactively when the user has standing instructions; via heartbeat checks |
| Where does the AI live? | In a tab the user opens | In the user's chat apps already (WhatsApp, Telegram) and in the user's filesystem |
| What does the AI remember? | Within-conversation context that's discarded between sessions | Persistent session across all interactions, all devices, all time |
| Who initiates? | The user, every time | Either party: the user can ask, and the AI can also surface things proactively |
The fourth row is the most important and the easiest to miss. In a chatbot, the user is always the initiator. In an Identic AI, the AI is also an initiator. Maya's Identic AI can send her a Telegram message at any moment ("a customer's refund request just arrived; based on our patterns, this is one I'd auto-approve. OK to proceed?"). This bidirectional initiation is what makes an Identic AI useful for governance delegation. A chatbot Maya has to open and consult cannot pre-filter approval threads; an Identic AI that messages Maya when it needs to can.
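To make bidirectional initiation concrete: stripped of OpenClaw's plumbing, a proactive governance delegate reduces to a loop like the sketch below. The Paperclip endpoint is hypothetical, and OpenClaw's heartbeat and channel integrations replace every line of this; only the Telegram Bot API `sendMessage` call is a real interface.

```typescript
// Minimal proactive loop, for illustration only.
const PAPERCLIP_PENDING =
  "http://localhost:3000/api/approvals?status=pending"; // hypothetical endpoint
const TELEGRAM_SEND = `https://api.telegram.org/bot${process.env.BOT_TOKEN}/sendMessage`;

async function heartbeat(chatId: string): Promise<void> {
  // Poll the management layer for approvals the owner hasn't seen.
  const pending: { id: string; summary: string }[] = await fetch(
    PAPERCLIP_PENDING
  ).then((r) => r.json());

  for (const approval of pending) {
    // The delegate initiates: the owner never opened an app to see this.
    await fetch(TELEGRAM_SEND, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({
        chat_id: chatId,
        text: `Approval ${approval.id}: ${approval.summary}. OK to proceed?`,
      }),
    });
  }
}

setInterval(() => void heartbeat("123456789"), 5 * 60_000); // every 5 minutes
```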
Concretely, what a first session looks like. A user who runs `curl -fsSL https://openclaw.ai/install.sh | bash` then `openclaw onboard` walks through approximately this sequence (per the docs at docs.openclaw.ai/getting-started):
- Pick a model. Anthropic Claude, OpenAI GPT, or a local model. (For Course Eight, we'll use Claude opus-4-7 as the default.)
- Name your OpenClaw. Users name their Identic AI, "Claudia," "Jarvis," "Brosef," and pick a persona tone. The naming makes the Identic AI feel like an entity rather than a service: once it has a name, users tend to refer to their OpenClaw by name in conversation.
- Pick a primary chat app. Telegram (most common, easiest bot setup), WhatsApp, Discord, Signal, iMessage, or Slack. The user creates a bot in their chat app (for example, via `@BotFather` on Telegram) and pastes the token into OpenClaw.
- Initial skill installation. OpenClaw asks what the user wants their Identic AI to be good at: calendar management, email, code, governance, all of the above. Selected skills install from clawhub.ai (the community skills marketplace). For Course Eight's purposes, none of the default skills handle Paperclip integration; we'll write that one in Decision 3.
- A first conversation. The user opens their chat app, finds their OpenClaw bot, types "hello." OpenClaw responds in the chosen persona, and from that moment on, the Identic AI is reachable in the user's normal communication flow. No separate app to open.
The whole onboard takes 5 to 10 minutes. The architectural commitments Course Eight depends on, local filesystem storage, persistent session, chat-app reachability, are baked into the install rather than configured per-user.
Why OpenClaw as Maya's runtime, not an alternative:
| Alternative | Why not for Course Eight |
|---|---|
| Claude Agent SDK + custom Identic AI | You can build it, but you assemble persistence, chat-app integration, skill management, and onboarding yourself. OpenClaw ships all of those. |
| OpenAI Agents SDK + custom Identic AI | Same problem, plus the architecture is closer to the cloud-workforce shape than to a personal-AI shape. |
| A hosted SaaS personal AI (for example, a hypothetical "Claude for Personal") | Fails the self-sovereign property. Maya's accumulated judgment lives on a platform that can read, modify, or revoke it. Tapscott's central commitment is violated. |
| Roll-your-own using Inngest + a custom front-end | Possible, but you spend the course teaching plumbing rather than teaching Identic AI. The course's value is in the architectural patterns, not the runtime mechanics. |
OpenClaw fits the requirements on each axis: it ships, it's open source, it stores the user's context on the user's filesystem by default, and it reaches the user through chat apps they're already in. For Course Eight's worked example, it's the runtime the thesis names. For your real deployment, the architectural patterns transfer to other runtimes; OpenClaw is the example, not the requirement.
If you can't use OpenClaw: transfer guidance. Compliance constraints, model-provider restrictions, or build-vs-buy preferences may put OpenClaw out of reach. The architectural patterns in Course Eight still apply; you'll just operationalize them on a different runtime. Here's how the load-bearing OpenClaw primitives map to what you'd build yourself:
| OpenClaw primitive | What you need to provide | Suggested substrate |
|---|---|---|
| The agent loop (Concept 4) | A model-calling loop with tool execution | Claude Agent SDK or OpenAI Agents SDK with a long-running daemon process |
| Local-machine runtime (Concept 4) | A process that survives reboots and stays running on the owner's hardware | `systemd` on Linux, `launchd` on macOS, a Windows service, or a small Docker container the owner runs locally |
| The `session` primitive (Concept 5) | Persistent context storage on the owner's filesystem | A directory the daemon reads and writes; structured JSON or SQLite is enough; whatever schema you pick is yours |
| Chat-app reachability (Concept 6) | A way for the owner to message the agent and get responses | A single chat-app bot (Telegram bots are simplest; the BotFather flow takes about 5 minutes). The 50+ integrations are OpenClaw's value-add, not a requirement of the architecture |
| Skills and plugins system | An extension mechanism for capabilities like "talk to Paperclip" | Hand-written Python or TypeScript functions registered as tools in your Agent SDK |
| The signing key (Concept 8) | An ed25519 key pair stored on the owner's filesystem | Standard library code: `crypto.subtle` in browser, `crypto` in Node, `cryptography` in Python |

What changes versus Course Eight as written: you write more glue code (skill manifests, daemon supervision, chat-app integration). What stays the same: the trust-delegation primitive (Concepts 7 through 9), the governance ledger schema (Concept 11), the two-envelope intersection, the recalibration loop. Build effort to assemble a Course-Eight-equivalent on the Claude Agent SDK: roughly a weekend for an experienced engineer. The discipline is in the patterns, not the runtime.
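To make the session row of that table concrete, here is a minimal sketch of a daemon-owned context store as human-readable JSON on the owner's filesystem. The `~/.identic` path and the `Session` shape are illustrative assumptions for a roll-your-own build, not OpenClaw's actual layout.

```ts
// session-store.ts — a sketch of the "session primitive" row above.
// Paths and field names are hypothetical, not OpenClaw's schema.
import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

export interface Session {
  preferences: string[];                                    // standing instructions, plain language
  decisions: { id: string; summary: string; at: string }[]; // recorded decisions
  derivedPatterns: string[];                                // accumulated model of the owner's judgment
}

const SESSION_DIR = join(homedir(), ".identic");
const SESSION_PATH = join(SESSION_DIR, "session.json");

export function loadSession(): Session {
  if (!existsSync(SESSION_PATH)) {
    return { preferences: [], decisions: [], derivedPatterns: [] };
  }
  return JSON.parse(readFileSync(SESSION_PATH, "utf8"));
}

export function saveSession(s: Session): void {
  mkdirSync(SESSION_DIR, { recursive: true });
  // Human-readable on purpose: the owner can cat, diff, git-version, or delete it.
  writeFileSync(SESSION_PATH, JSON.stringify(s, null, 2));
}
```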
Bottom line: OpenClaw is the open-source, user-owned personal AI runtime Course Eight teaches against. It runs on the owner's local machine, stores context on the owner's filesystem, and is reachable through the chat apps the owner already uses. As of May 2026, it is the credible shipped operationalization of Tapscott's self-sovereign Identic AI commitment: open source under an MIT license, with a large GitHub following and major-vendor sponsorship. The course's patterns transfer to other runtimes, but OpenClaw is the worked example.
Concept 5: Persistent memory and the owner's local context
The single property that turns OpenClaw from "another AI chatbot" into "Maya's Identic AI" is persistent memory. Without it, Maya would have to re-explain her judgment to OpenClaw on every interaction; the value of an Identic AI is precisely that it doesn't forget. Concept 5 walks through how OpenClaw's persistent memory works, where Maya's accumulated context actually lives, and what the architecture implies for Course Eight's central use case.
The session primitive. OpenClaw organizes persistent context around what its docs call the session: the unit of accumulated context for a user across time, across devices, across chat apps. Three things to know about the session model:
- Storage is local. Maya's session lives on Maya's hardware, in her filesystem, in her home directory under the OpenClaw config path. It is not synced to a cloud account by default. There is no "OpenClaw Cloud" that stores Maya's context. (Maya can opt in to syncing for multi-device use; Concept 13 walks the tradeoffs.)
- Storage is human-readable. The session is structured data on Maya's disk that she can inspect, edit, version with `git`, encrypt, back up, or destroy at her discretion. Tapscott's self-sovereignty commitment is operationalized here as a filesystem property: Maya owns the files; the files are Maya's context.
- Storage accumulates. Every interaction adds to the session. Maya messages OpenClaw on Telegram about a refund decision; that decision joins the session. She approves a hire over coffee through the menubar app; that approval joins the session. Six months in, the session contains hundreds of Maya's recorded decisions, preferences, communication patterns, and explicit instructions.
What gets persisted, concretely. OpenClaw's session captures more than just the chat transcript. From the docs:
| Persisted | What it is | Why it matters for Owner Identic AI |
|---|---|---|
| Conversation history | The literal exchange of messages between Maya and OpenClaw across chat apps | The raw record. Maya can re-read what she said and what OpenClaw did. |
| User preferences | Explicit statements Maya makes ("I prefer morning meetings"; "Don't approve anything over $5,000 without me") | Standing instructions the Identic AI applies thereafter |
| Skills installed and configured | Which OpenClaw skills Maya has installed; their configurations | The Identic AI's capability surface, including the Paperclip-integration skill Course Eight builds |
| Persona | Maya's named identity for OpenClaw (for example, "Maya's Lobster" or "Claudia") and the persona OpenClaw projects back | Continuity of identity across chats and devices |
| Activity log | What OpenClaw did and when: every skill invocation, every external API call, every decision | The audit trail that becomes Course Eight's governance ledger (Concept 11) |
| Derived patterns | The Identic AI's accumulated model of Maya's judgment, learned over time | The judgment-learning loop the course is centrally about |
What does not get persisted by default: ephemeral environmental state (which chat app Maya happened to use for a given message, the precise timestamp at millisecond resolution, and similar). These are recorded but not surfaced as part of the identity-relevant session. The distinction matters because Concept 13's question, what travels with Maya across devices and employers, is exactly the question of which subset of the session is identity-relevant.
Why filesystem-local storage is the load-bearing architectural commitment. This is the part of the OpenClaw architecture most easily missed. Many AI products advertise "persistent memory" while quietly meaning "we store your memory in our cloud, and you trust us with it." OpenClaw's choice, Maya's filesystem by default, is qualitatively different. It means:
- Maya can read her own context with `cat ~/.openclaw/session/*.json` (or whatever the path is at the time of writing; consult docs.openclaw.ai/concepts/session for the canonical layout).
- Maya can back up her context with the same tools she uses for any other files (Time Machine, `rsync`, Git, encrypted external drives).
- Maya can move her context to a new device by moving the files.
- Maya can destroy her context by deleting the files. There is no "delete request" to file with a vendor.
- Maya can encrypt her context at rest with her existing disk-encryption tools.
- Maya's context is not held hostage by any platform. If OpenClaw the project shut down tomorrow, Maya's accumulated context would still exist on her disk; she could read it, parse it, and load it into a different runtime.
This is the property that satisfies Tapscott's self-sovereignty commitment. It is also the property that makes Maya's Identic AI survive her changing her chat app, her model provider, even (with effort) her runtime. The session is Maya's; the runtime is configurable.
Maya has been using OpenClaw for six months. Her session directory on her Mac is about 240 MB. She switches employers: she sells her current AI-native company and starts a new one. Which of the following parts of her session should she expect to carry forward to her new company, and which should not?
Items: (a) her communication style and tone preferences, (b) her past approval decisions on Workers at the old company, (c) the Paperclip-integration skill's configuration pointing at the old company's API endpoint, (d) her standing instruction "always escalate envelope-extension hires", (e) the activity log of every action OpenClaw took at the old company.
Predict for each: travels with Maya / stays with the old company / it's complicated. Then read on.
Answer: (a) travels. Communication style is a personal pattern, not company property. (b) it's complicated. The patterns derived from those decisions ("Maya tends to approve hires with envelope extensions when X but not Y") travel; the decisions themselves (which were about specific Workers and specific issues at the old company) stay, as a matter of confidentiality. (c) stays. That skill's configuration is pointed at the old company's endpoint; the new company has different endpoints, possibly different auth. (The skill itself can travel as a recipe and be reconfigured.) (d) travels. That's a standing personal instruction; it applies anywhere. (e) stays. The activity log records actions taken at the old company against the old company's systems; it's audit data the old company has rights to, not data Maya owns.
This is the Harper Carroll seam from Course Seven's Concept 13, applied to Maya's session: patterns travel, specific records don't. The filesystem layout of the OpenClaw session makes this distinction enforceable. Maya can package and migrate the personal-patterns subset and leave the company-records subset behind.
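As a sketch of how the filesystem layout makes the seam enforceable, here is the migration split expressed against the hypothetical `Session` shape from the earlier transfer-guidance sketch; OpenClaw's real layout will differ (consult docs.openclaw.ai/concepts/session).

```ts
// migrate-session.ts — "patterns travel, records stay," as code.
// Reuses the illustrative Session shape from session-store.ts above.
import { loadSession } from "./session-store";

// What Maya packages for the new company: personal patterns and standing
// instructions, stripped of company-specific records.
export function exportPersonalSubset() {
  const s = loadSession();
  return {
    preferences: s.preferences,          // (a), (d): travel with Maya
    derivedPatterns: s.derivedPatterns,  // the (b) patterns travel...
    decisions: [],                       // ...but the decision records stay: (b), (e)
  };
}
```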
Bottom line: Maya's Identic AI is Maya's session, a structured local store of her conversation history, preferences, skills, persona, activity log, and derived patterns. It lives on her filesystem, in human-readable files she owns. This is what makes the architecture self-sovereign and what makes Maya's accumulated judgment survive every change of device, employer, or chat app. The session is Maya's; everything else is configuration.
Concept 6: Chat apps as the interface layer
The non-obvious architectural choice OpenClaw makes is that it does not have its own user interface for the user-to-AI conversation. There is no OpenClaw web app, no OpenClaw mobile app for chatting. The user talks to OpenClaw through chat apps the user already uses: WhatsApp, Telegram, Discord, Slack, Signal, iMessage. Concept 6 walks through why this is the right architectural choice for an Identic AI and what it implies for the Course Eight worked example.
Why not a dedicated app. The default move for an AI product is to ship a chat UI of its own: a web app, a mobile app, both. OpenClaw deliberately doesn't. The reasoning, drawn from the project's design choices, is that an Identic AI should live where the user already lives, not where the product wants the user to live. The user is already in their chat apps all day: group threads with their team, DMs with their family, WhatsApp with their partner. Putting the Identic AI in those apps means:
- The user doesn't context-switch to talk to it.
- The user can include the Identic AI in group chats with other humans (Maya can add her OpenClaw to a board-discussion Slack channel; her Identic AI participates as a peer).
- The user's existing notification system handles attention routing (Maya's phone buzzes for OpenClaw the same way it buzzes for messages from her team).
- The user has no separate app to remember to open.
- The Identic AI inherits all the conveniences of the chat apps: voice messages, file attachments, group threads, search history.
The chat-channel integrations are an architectural commitment, not a feature list. OpenClaw lists 50+ integrations, including 15+ chat channels (WhatsApp, Telegram, Discord, Slack, Signal, iMessage, and more) at openclaw.ai/integrations. The chat-channel set is the part that matters here. The user picks their preferred chat app or apps during openclaw onboard; OpenClaw integrates with those; the user starts messaging. There is no learning curve for the chat interface because the user already knows how their chat app works.
What this means for Maya. In the Course Eight worked example, Maya configures OpenClaw to be reachable through Telegram. (We pick Telegram because the OpenClaw docs use it as the default example, and because Telegram has clean bot semantics. Maya could equally use WhatsApp or Signal.) Maya names her OpenClaw "Claudia" during onboarding. From then on:
- When Maya wants to message her Identic AI, she opens Telegram, finds the chat with Claudia, and types. Same gesture as messaging a teammate.
- When her Identic AI wants to surface something to her (an approval that needs her judgment, a weekly governance-ledger summary), it sends her a Telegram message. Same notification flow as any other message.
- When Maya is at her desk, she can use OpenClaw's macOS menubar app instead of switching to Telegram. The session is shared.
- When Maya travels and uses her phone, the same Claudia is there. The session syncs across her devices (with the architectural caveats from Concept 13 about how that sync works).
The implication for the trust-delegation problem. This is subtle and worth noting before Part 3 walks into it. Because OpenClaw lives in chat apps the user already trusts, the user-to-OpenClaw trust is inherited from the chat app's own auth. Maya is already logged in to Telegram as Maya; she is already the recipient of messages sent to her Telegram account; OpenClaw doesn't need to re-authenticate her at the user-to-AI boundary. The trust-delegation problem in Course Eight is therefore not "how does OpenClaw know it's Maya", which is solved by Telegram's own auth, but "how does the Paperclip management layer know that an approval request originating from Maya's OpenClaw is genuinely Maya-authorized." That's Concept 7's problem.
Paste this into your AI coding assistant:
"OpenClaw reaches the user through chat apps as its primary interface: WhatsApp, Telegram, Discord, Slack, Signal, iMessage, and more. The architectural commitment is that the Identic AI lives where the user already lives, not in a separate app. From the user's perspective, this is a clear win: zero context switching, existing notifications, group chats. From a security perspective, list three things the user should verify about their chosen chat app before treating their OpenClaw conversation as trustworthy for governance decisions. For example: is the chat encrypted end-to-end? What happens if the chat-app provider is compelled to hand over messages? What is the recovery story if the user's chat-app account is compromised?"
What you're learning: the chat-app interface inherits trust from the chat app, and that trust has real properties Maya should verify. Telegram's end-to-end encryption is opt-in (only for "Secret Chats"); WhatsApp is end-to-end by default; iMessage is end-to-end within Apple's ecosystem. Each choice has implications for whether the Paperclip integration is treating chat-app messages as cryptographic evidence of Maya's intent (it should not) or as a convenience channel (it should). The signed-delegation primitive in Concept 8 is what makes governance decisions cryptographically grounded regardless of which chat app routes them.
Bottom line: OpenClaw's chat-app-first interface architecture is a deliberate choice: the Identic AI lives where the owner already lives, not in a separate app. For Maya, this means Telegram (or WhatsApp, or any of the supported chat channels) becomes her interface to her Owner Identic AI. The architectural implication for trust delegation is that user-to-OpenClaw auth is inherited from the chat app; the load-bearing trust problem is on the OpenClaw-to-Paperclip boundary, which Concept 7 takes up.
Part 3: Trust delegation and governance
Parts 1-2 named the problem (Maya can't read approvals at scale), the architectural primitive that solves it (Identic AI), and the runtime that operationalizes it (OpenClaw). Part 3 takes up the load-bearing technical move of Course Eight: how the company's Paperclip management layer can safely accept approval decisions from Maya's Identic AI without abandoning the safety property the entire seven-invariant thesis depends on. Three Concepts.
Concept 7: The trust-delegation problem
When Maya logs into Paperclip and clicks "approve" on a hire proposal, the Manager-Agent records that the human owner approved. When Maya's Identic AI ("Claudia") clicks "approve" on a routine hire proposal at 3 AM while Maya is asleep, the Manager-Agent records that something approved. The question Concept 7 takes up is what should be different about these two cases and what should be the same.
The naive answers are both wrong:
Naive answer A: treat them the same. "Claudia is authorized; her click is Maya's click; record it as Maya-approved." This collapses the distinction and produces an audit trail that lies. Six months later when Maya is reviewing how a particular decision was made, she cannot tell from the activity log whether she made it herself or her Identic AI made it on her behalf. The information needed for recalibration (Concept 12), namely did my Identic AI handle this well, or do I need to correct it?, is gone. Truthful auditing requires distinguishing the two principals.
Naive answer B: always require the human. "Claudia can't approve anything; she can only message Maya the approval requests, and Maya clicks 'approve' herself." This is the architecture without an Identic AI; we have not made progress. The whole point of Part 1's argument is that Maya can't be in the loop on every routine approval. An Identic AI that can only relay messages is not solving the scaling problem.
The correct answer is structurally a three-part move:
- The Identic AI has its own identity in the system. Maya's OpenClaw, configured as Claudia, has a distinct identity that Paperclip recognizes: Claudia is registered as a Paperclip agent, with her own key. Claudia is not a user account; she is a delegated agent of Maya's. This is a new principal type the previous courses didn't need.
- The Identic AI's authority is a subset of Maya's authority, set by Maya, with clear limits. Maya can approve anything in the owner-authority envelope; Claudia can approve a configured subset (Concept 9 walks the intersection). The subset is recorded: Maya knows what Claudia is allowed to do; Claudia knows; Paperclip stores it.
- The audit trail records which principal acted, and you have to be honest about where that record lives. Here is the part that is easy to get wrong, so the course states it plainly up front. Against real Paperclip 2026.513.0, the approval routes are board-scoped: an approve, reject, or request-revision call is recorded as a board action, so Paperclip's own `activity_log` writes `actor_type='user'` whether Maya clicked it or Claudia drove it. Paperclip does not natively distinguish owner-human from owner-identic-ai on an approval. So the two-principal distinction lives in the course's own `governance_ledger` table: every decision Claudia makes writes a `governance_ledger` row carrying `principal='owner_identic_ai'`, her attestation, and her reasoning. A Maya-resolved approval has no such row. Join the two tables on the approval id and the distinction is fully recoverable. (The simulated-track mock takes a shortcut: it implements native `actor: owner_identic_ai` attribution directly, purely as a teaching simplification. Real Paperclip does not, and the full-implementation track is honest about that throughout Part 4.)
This three-part move is what makes delegated governance safe rather than reckless. The principle holds in both tracks: two principals, one human, distinct audit truth. What differs is the mechanism: native attribution in the mock, the governance_ledger against real Paperclip. Maya delegates, but doesn't disappear: the audit record makes her always recoverable.

Two principals, one human: against real Paperclip the distinction lives in the course's own governance_ledger, joined to Paperclip's activity_log on the approval id, so every delegated decision stays recoverable. The mock implements it natively as a teaching simplification.
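Here is what "fully recoverable" means mechanically, as a minimal sketch. The row shapes are illustrative stand-ins for the two tables; the real ledger schema lives in `docs/governance-ledger-schema.sql`.

```ts
// recover-principal.ts — join the course's governance_ledger to Paperclip's
// activity_log on the approval id to recover which principal acted.
interface ActivityRow { approvalId: string; actorType: "user"; action: string }
interface LedgerRow { approvalId: string; principal: "owner_identic_ai"; reasoning: string }

export function principalFor(
  approvalId: string,
  activityLog: ActivityRow[],
  governanceLedger: LedgerRow[],
): "owner_human" | "owner_identic_ai" | "unknown" {
  const acted = activityLog.some((r) => r.approvalId === approvalId);
  if (!acted) return "unknown";
  // Paperclip records both cases as actor_type='user'; the ledger row is the tiebreaker.
  const delegated = governanceLedger.some((r) => r.approvalId === approvalId);
  return delegated ? "owner_identic_ai" : "owner_human";
}
```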
Where this is genuinely hard. The naive answers fail in obvious ways, but the correct answer has a genuinely hard implementation question buried in it: how does the Paperclip management layer verify that a request claiming to be from "Maya's Identic AI" is in fact from Maya's Identic AI, and not from some other process pretending to be? If anyone can post actor: owner_identic_ai, principal: maya to the approval API and have it accepted, the whole architecture is broken. The verification mechanism, signed delegation from local credentials, is what Concept 8 takes up.
Bottom line: the trust-delegation problem is the load-bearing technical move of Course Eight. Two principals are authorized, the human owner and the owner's Identic AI, and the architecture has to distinguish them in the audit truth, bound the Identic AI's authority as a subset of the owner's, and verify the Identic AI's identity cryptographically. Neither "treat them the same" nor "always require the human" works. The three-part move is distinct identity, bounded authority, and truthful audit. The honest detail: against real Paperclip the approval routes are board-scoped, so the two-principal distinction is not carried by Paperclip's `activity_log`; it lives in the course's own `governance_ledger`. The simulated mock implements native attribution as a teaching simplification. The principle is identical in both tracks; the mechanism differs.
Concept 8: Signed delegation from local credentials
The next two Concepts and Decisions 4-5 use signed-delegation primitives. If "ed25519" and "signature verification" don't already sit in your toolkit, here's the 90-second version: an ed25519 key pair is a pair of related files, a private key (a secret your machine holds, about 32 bytes) and a public key (a non-secret derived from it, also about 32 bytes). When your machine signs a payload, it produces a short string (the signature) that anyone holding your public key can verify mathematically, proving the payload came from whoever holds the private key, without revealing the private key. ed25519 is the modern default: small keys, fast operations, and broad adoption across modern protocols (SSH keys, signed software updates, and TLS 1.3 all support it). Canonical JSON encoding matters because the sign-and-verify operation is over exact bytes, not over "the same data": if signer and verifier serialize the JSON differently (for example, key ordering), the signature won't verify even though the data is logically identical. Standard practice is to sort keys and strip extra whitespace before signing. All three primitives, ed25519, signature verification, and canonical JSON, ship in standard Node and browser libraries. Claude Code or OpenCode handles the implementation details in Decisions 4-5; this primer is so the briefings read as engineering, not magic.
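Here is that primer as runnable Node code, a minimal sketch using only the standard library. The decision-payload fields are illustrative, not Paperclip's schema.

```ts
// sign-verify.ts — ed25519 key pair, canonical JSON (sorted keys), sign, verify.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Canonical encoding: sort keys so signer and verifier serialize identical bytes.
function canonical(obj: Record<string, unknown>): Buffer {
  const sorted = Object.fromEntries(
    Object.entries(obj).sort(([a], [b]) => a.localeCompare(b)),
  );
  return Buffer.from(JSON.stringify(sorted));
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const decision = { approvalId: "apr_123", decision: "approve", principal: "owner_identic_ai" };
const payload = canonical(decision);

// ed25519 in Node takes a null digest algorithm: the scheme hashes internally.
const signature = sign(null, payload, privateKey);
const ok = verify(null, payload, publicKey, signature); // true iff bytes and key match

console.log({ signature: signature.toString("base64"), ok });
```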
The verification mechanism uses primitives that ship in May 2026: filesystem-stored signing keys, optionally hardware-backed via platform keystores (macOS Keychain, Windows Credential Manager, Linux libsecret), combined with passkey/WebAuthn-style cryptographic challenges on the Paperclip side. This is one of the parts of Course Eight that wires together building blocks both products ship rather than using a single shipped integration; the building blocks are stable, the wiring is what the course teaches you to assemble.
The signing key on Maya's machine. During the Course Eight lab (Decision 1), Maya's OpenClaw is configured with a fresh cryptographic key pair. The private key lives in Maya's home directory (or, optionally, in her platform keystore). The public key is registered with the Paperclip management layer as belonging to "Maya's Identic AI." From then on:
- When Maya's OpenClaw makes a request to Paperclip's approval API on Maya's behalf, OpenClaw signs the decision payload with the private key.
- The course's own delegation layer (Decision 5) verifies the signature with the registered public key. (Paperclip itself has no signature field on the approval routes; the ed25519 attestation is the course's own layer, not something Paperclip checks. The signature still does real work: it proves the decision came from Claudia's key and was not tampered with before the delegation layer drives the real Paperclip route.)
- If the signature is valid, the delegation layer knows the request is genuinely from Maya's registered Identic AI.
- The request is processed through the bounded-authority check (Concept 9); the delegation layer then calls the real board-scoped approval route, and writes a `governance_ledger` row recording `principal: owner_identic_ai`. (Paperclip's own `activity_log` records the approval as a board action, `actor_type='user'`; the owner-human vs owner-identic-ai distinction is in the `governance_ledger`. Concept 7 and Part 4 cover this in full.)
Why filesystem (or platform keystore) and not "the cloud." The signing key is Maya's; it should live where Maya can control it. Filesystem storage means Maya can back it up, copy it to a new device, or destroy it the same way she controls the rest of her OpenClaw session. Platform-keystore storage (macOS Keychain and the like) adds hardware backing: the key cannot be exported without Maya's OS authentication, which raises the bar for an attacker who gains filesystem access but not user-session access. Both are local-by-default; neither leaks Maya's identity to a third party.
The stolen-laptop failure mode. What happens if Maya's Mac is stolen? Three failure scenarios, ordered by severity:
- Laptop stolen, Maya not logged in, FileVault encrypted. The attacker has the device but cannot read the filesystem. The private key is unreadable. No attack surface; Maya replaces the laptop and re-onboards on a new one.
- Laptop stolen, Maya logged in but the screen is locked. The attacker has the device, the disk is decrypted, but they cannot unlock the screen. The private key is on disk but the attacker cannot trigger OpenClaw to use it. Some risk if the attacker has filesystem-level access (for example, booting from an external drive); the platform keystore approach raises this bar.
- Laptop stolen, attacker has Maya's session active. The attacker can talk to Maya's OpenClaw, can sign requests as Maya's Identic AI, can approve things at Paperclip up to Claudia's delegated authority. This is the worst case. The mitigation is revocation at the Paperclip side: Maya, from a different device, registers a new public key and revokes the stolen one. The course's lab (Decision 7) walks this revocation flow.
The recovery story is the load-bearing detail. A signed-delegation architecture that has no recovery path for a stolen device is fragile. The course teaches the revocation flow as part of Decision 7 because that's where the architectural commitment becomes operational: Maya can always, from any device she still controls, log into Paperclip (with her own owner-human credentials, not her Identic AI's credentials) and revoke any Identic AI key that's been compromised.
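A sketch of the rotation move, under loudly labeled assumptions: the endpoint paths and payload fields below are illustrative placeholders for the registration-and-revocation surface you build in Decisions 5 and 7, not documented Paperclip routes. The shape of the move is what matters: register first, revoke second, authenticated as owner-human.

```ts
// rotate-key.ts — stolen-laptop recovery: Maya, authenticated with her own
// owner-human credentials (passkey session), swaps the registered Identic AI key.
// Endpoints are HYPOTHETICAL stand-ins for the course's delegation-layer surface.
export async function rotateIdenticKey(
  ownerToken: string,      // Maya's owner-human session token, not Claudia's credential
  stolenKeyId: string,
  newPublicKeyPem: string,
) {
  const base = process.env.PAPERCLIP_API_URL;
  const headers = { authorization: `Bearer ${ownerToken}`, "content-type": "application/json" };

  // 1. Register the replacement key first, so Maya is never without a delegate.
  await fetch(`${base}/api/identic-keys`, {
    method: "POST", headers, body: JSON.stringify({ publicKey: newPublicKeyPem }),
  });

  // 2. Revoke the stolen key. Anything signed with it now fails the
  //    signer-key-is-registered gate in the delegation layer.
  await fetch(`${base}/api/identic-keys/${stolenKeyId}`, { method: "DELETE", headers });
}
```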
The relationship to passkeys. Maya's owner-human credentials in Paperclip are themselves protected by passkeys / WebAuthn, the same primitive the broader web is moving toward in 2026. Maya signs into Paperclip with her passkey (which is bound to a device she controls, hardware-backed where possible). The Identic AI signing key is a separate credential: it represents Claudia, not Maya, and Maya can revoke it from her passkey-authenticated session. Two credentials, two principals, one human; the architecture distinguishes them cleanly.
Maya's company has a published API at https://maya-co.com/api. Someone outside the company tries to send an approval-acceptance request claiming to be Maya's Identic AI. The request includes a signature field. Without the registered public key on file, the request would be impossible to verify. Predict: in the order your delegation layer's verification logic runs (Decision 5's three gates), which check fails first? (a) signature cryptographic validity, (b) signer-key-is-registered, (c) bounded-authority-check (Concept 9), (d) governance-ledger write. Then read on.
Answer: (b). The first thing your delegation layer checks is whether the public key the signature was made with is a registered Identic AI key for a known owner. If the signer key is not on file, the layer rejects the request before doing any cryptographic verification. (You can verify the signature cryptographically, but if it's signed with an unknown key, the result of verification doesn't help: a valid signature from an unknown key is still an unknown principal.) The first gate is identity registration; cryptographic validity is the second gate. This ordering matters because it prevents attackers from spending CPU on signature verification for arbitrary inbound requests; only requests from registered principals get that far. (Note this verification is the course's own delegation layer, not Paperclip: Paperclip has no signature field on the approval routes. Concept 7 and Decision 5 cover why.)
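As a sketch, here is the gate ordering the answer describes, with illustrative types; the real layer is what you build in Decision 5.

```ts
// verify-gates.ts — Decision 5's gate ordering, sketched.
import { verify, KeyObject } from "node:crypto";

interface SignedRequest { ownerId: string; keyId: string; payload: Buffer; signature: Buffer }

export function checkDelegatedRequest(
  req: SignedRequest,
  registeredKeys: Map<string, KeyObject>,          // keyId -> registered Identic AI public key
  inDelegatedEnvelope: (payload: Buffer) => boolean,
): "accepted" | "rejected" | "surface_to_owner" {
  // Gate 1: identity registration. Unknown signer keys are rejected before any
  // CPU is spent on cryptography — a valid signature from an unknown key is
  // still an unknown principal.
  const pub = registeredKeys.get(req.keyId);
  if (!pub) return "rejected";

  // Gate 2: cryptographic validity of the ed25519 signature.
  if (!verify(null, req.payload, pub, req.signature)) return "rejected";

  // Gate 3: bounded authority (Concept 9). Out-of-envelope actions are not
  // errors; they surface to the owner as approval requests.
  if (!inDelegatedEnvelope(req.payload)) return "surface_to_owner";

  return "accepted"; // Gate 4, the governance_ledger write, happens downstream
}
```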
Bottom line: signed delegation lets Maya's Identic AI act on her behalf at Paperclip without ambiguity about which principal acted. The signing key lives on Maya's machine (filesystem or platform keystore); the public key is registered so the course's own delegation layer can verify Claudia's attestations; that layer writes the delegated principal into the `governance_ledger`. The stolen-laptop case is handled by revocation: Maya can always invalidate her Identic AI's credentials from any device she still controls. The architecture is robust in the failure mode that matters.
Concept 9: The two-envelope intersection
Maya's authority envelope, in Paperclip's model, is what she can approve: anything in the company's authority surface, up to and including envelope-extension hires, terminations, and unlimited refunds. That is not the right envelope for Claudia. Claudia's delegated envelope is a deliberately narrower subset of Maya's, set by Maya, and recorded at Paperclip as part of the registered Identic AI's metadata. Concept 9 walks the intersection logic: what gets executed when Claudia acts is not what Claudia claims authority for, and not what Maya could do; it is the intersection of the two.
The two envelopes, concretely. For the Course Seven customer-support company, Maya's envelopes look something like this:
| Maya's owner-authority envelope (what Maya herself can approve) | Claudia's delegated envelope (what Maya's Identic AI can approve on her behalf) |
|---|---|
| Refunds: unlimited | Refunds: up to $2,000 |
| Hires within existing envelope: yes | Hires within existing envelope: yes |
| Hires extending company envelope: yes | Hires extending company envelope: no, surface to Maya |
| Terminations: yes | Terminations: no, surface to Maya |
| Budget overrides: unlimited | Budget overrides: up to 20% over monthly cap |
| Standing-policy edits: yes | Standing-policy edits: no, surface to Maya |
| CMA migrations: yes | CMA migrations: up to one per quarter, only for Workers below $1,000/mo budget |
The pattern: Claudia can act autonomously on routine, bounded decisions; she defers to Maya on strategic, irreversible, or category-changing decisions. Maya sets these thresholds during the Course Eight lab (Decision 4) and revisits them quarterly.

An action executes only when it sits inside both envelopes at once: the strict intersection, never the union.
The intersection logic. When Claudia attempts to act, Paperclip computes:
- Is this action in the owner-authority envelope? (That is, would Maya herself be allowed to approve it?) Yes for everything in this example, because Maya owns the company; no for, say, a request that exceeds the company's overall budget cap, which Maya cannot unilaterally approve either.
- Is this action in the Identic AI's delegated envelope? (That is, is Claudia allowed to act on this?) Yes for refunds up to $2,000; no for envelope-extension hires.
- The action proceeds only if both are yes. If only (1) is yes and (2) is no, the action surfaces to Maya as an approval request she handles herself.
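A minimal sketch of that check in code. The envelope fields mirror the table above; the shapes are illustrative, not Paperclip's schema.

```ts
// envelope-intersection.ts — Concept 9's intersection logic, sketched.
interface Envelope {
  refundCeiling: number;            // Infinity stands in for Maya's "unlimited"
  envelopeExtensionHires: boolean;
  terminations: boolean;
}

interface Action { kind: "refund" | "envelope_extension_hire" | "termination"; amount?: number }

function allows(env: Envelope, a: Action): boolean {
  switch (a.kind) {
    case "refund": return (a.amount ?? 0) <= env.refundCeiling;
    case "envelope_extension_hire": return env.envelopeExtensionHires;
    case "termination": return env.terminations;
  }
}

// The action proceeds only if it sits inside BOTH envelopes: the intersection.
export function decide(owner: Envelope, delegated: Envelope, a: Action) {
  if (!allows(owner, a)) return "rejected";             // outside even Maya's ceiling
  if (!allows(delegated, a)) return "surface_to_owner"; // Maya handles it herself
  return "execute";
}

const maya: Envelope = { refundCeiling: Infinity, envelopeExtensionHires: true, terminations: true };
const claudia: Envelope = { refundCeiling: 2000, envelopeExtensionHires: false, terminations: false };
console.log(decide(maya, claudia, { kind: "refund", amount: 1200 })); // "execute"
console.log(decide(maya, claudia, { kind: "termination" }));          // "surface_to_owner"
```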
Why the intersection is the right model, not the union. Maya's authority is a ceiling, not a transferable bundle. Claudia cannot exceed Maya's authority; if Maya can approve up to $X, Claudia cannot approve more. But Claudia is also bounded below Maya's ceiling by Maya's deliberate choice: Maya might say "I trust Claudia with up to $2,000 in refunds, but $5,000 refunds I want to see myself." The architecture has to encode both bounds, and the intersection is the only logic that does.
The standing-instructions surface. Maya can extend or contract Claudia's delegated envelope by giving Claudia standing instructions. Examples Maya might give over six months:
- "Approve any refund up to $2,000 if the customer's account is more than two years old and they have no prior refunds."
- "Always surface envelope-extension hires to me, even if the new authority is something we've granted before."
- "For the next month, while we're testing the new product line, surface any hire to that product line to me."
- "If a Worker's budget overrun is correlated with a known incident in the activity log, you can approve the override up to 50%, but flag it in the weekly governance-ledger summary."
These instructions accumulate in Maya's session as part of the persistent memory (Concept 5). The Identic AI applies them in conjunction with the delegated-envelope ceiling. The result is that Maya's policy is learned and updated in plain language, not encoded once in a static config file. This is the Concept 1 distinction between a rule and a judgment, operationalized.
Bottom line: the action Claudia executes is the intersection of Maya's owner-authority envelope (the ceiling) and Claudia's delegated envelope (Maya's chosen subset). The intersection is the only logic that correctly bounds an Identic AI's authority: it cannot exceed the owner's, and it cannot exceed what the owner has chosen to delegate. Maya extends and contracts the delegated envelope through standing instructions she gives Claudia in plain language, making policy a judgment Maya updates over time, not a rule encoded once.
Part 4: The worked example: wiring Maya's Identic AI
Parts 1-3 explained the architecture. Part 4 walks through assembling it concretely. Seven Decisions, each one a briefing to your Claude Code or OpenCode session, never typed or edited by hand. By the end of Part 4, your Owner Identic AI is installed on a Mac, reachable through Telegram, and configured with a delegated envelope. What "demonstrably handling routine approvals" means depends on your track: on the full-implementation track, Maya's Identic AI is wired to her real Course Seven Paperclip deployment and exercises the full cryptographic round-trip against live approval routes. On the simulated track, you exercise the same architectural patterns against a local Paperclip mock: you finish with a working demo and a sound mental model, but the production wiring to a real Paperclip is not part of the simulated lab. Both tracks end with an Identic AI that handles a flood of routine approvals and surfaces the consequential ones; they differ in whether the Paperclip on the other end is real.
Course Eight's lab runs two ways. Pick before Decision 1: the choice affects how you read every briefing, how much time you commit, and what you end up with. Don't switch mid-lab; the wiring won't stay consistent.
- Full implementation (for owners of an actual Paperclip-running AI-native company). You do not modify Paperclip's codebase. You mint a real Paperclip board API key (from `board_api_keys`) and hand it to your Identic AI: that board credential is what authenticates her calls to the real approval routes (`POST /api/approvals/{id}/approve` and friends). You also register her as a real Paperclip agent with an `agent_api_keys` entry, which gives her a Paperclip identity and a revocation surface (not what authenticates the approve call). You build your own signing, signature-verification, and delegated-envelope layer inside the OpenClaw integration skill, plus your own additive `governance_ledger` table, and exercise the full cryptographic round-trip against a live Paperclip deployment. Time: 6 to 10 hours of lab on top of 3 hours of reading; realistically a 1-day sprint, or 2 days with thinking room. Output: a production-grade Owner Identic AI for your real company.
- Simulated (for everyone else: learners, students, people without a deployed Paperclip). You run a mock Paperclip endpoint that accepts a simplified API shape, returns canned responses, and writes to a local `governance_ledger.json`. The architectural patterns are exercised; the production wiring is not. Time: 2 to 3 hours of lab on top of 2 hours of reading; a comfortable half-day. Output: a working understanding of Owner Identic AI plus a local demo.
Most Decisions work for both tracks with the same briefing. Decisions 3, 5, 6, and 7 genuinely diverge: those carry a labeled Simulated track block and Full-implementation track block. The simulated mock keeps simplified routes (including a single /resolve route); the full track uses the real Paperclip surface.
The one thing the track choice changes conceptually: where the two-principal distinction lives. Concepts 7-9 already taught this, so it is a cross-reference, not new material. Against real Paperclip 2026.513.0, the approval routes are board-scoped: every activity_log row they write is actor_type='user', so Paperclip cannot tell a Claudia-resolved approval from a Maya-resolved one. The distinction lives in the course's own governance_ledger. The simulated mock implements native owner_identic_ai attribution directly, as a teaching simplification. The full-implementation track below is honest about this throughout.
Lab Setup: before Decision 1
The Decisions below are written to be executed through Claude Code or OpenCode (your agentic coding tool; see the Agentic Coding Crash Course if either is unfamiliar). You do not type or edit code manually anywhere in this lab. Every Decision is briefed to your agentic coding tool, which produces a plan, you review and approve the plan, and then the tool implements it. This is the same discipline Courses Three through Seven used.
If you skip this setup and try to drive the lab with a generic chat AI, three specific failure modes will bite you: (1) the chat AI can't read or edit your filesystem, so every code artifact has to be copy-pasted in and out of the chat, doubling lab time; (2) there's no Plan Mode equivalent, so the AI will start writing code before you've reviewed the approach, and you'll spend hours undoing wrong directions; (3) there's no project-rules file the AI reads on every session, so each session re-learns the constraints (don't touch the production governance_ledger; always run tests before committing; never edit course-seven-export/ since it's read-only), and you'll repeat yourself constantly.
Setup is two moves: install your coding agent, then download the starter project. About 10 minutes total.
1. Install Claude Code or OpenCode
Pick one. Both work for the entire lab; pick based on your model preference and how much config control you want. (If you already have one installed, skip to step 2, but run the upgrade command to make sure you're on the latest version.)
# macOS / Linux / WSL: recommended (auto-updates)
curl -fsSL https://claude.ai/install.sh | bash
# Or via Homebrew (no auto-update)
brew install --cask claude-code
# Verify and update
claude update
claude --version
Full installation reference: docs.claude.com/claude-code.
2. Download the starter project
Everything else the lab needs is pre-wired in a starter project. Download it, unzip it, and you have a real git init-able course-eight-lab/ folder with the project rules file, the permissions and guardrails, the reusable verification commands, the simulated-track mock, and Maya's read-only approval history already in place.
Download: identic-ai-crash-course.zip
The earlier draft of this course had you hand-type roughly 350 lines of config across six setup steps. That config now ships in the zip, so you read it and run it instead of transcribing it. Here is the layout, and why each piece matters:
identic-ai-crash-course/
├── README.md # human entry: what this is, pick a track, next step
├── CLAUDE.md # the project rules file (Claude Code)
├── AGENTS.md # the project rules file (OpenCode)
├── opencode.json # OpenCode: instruction files + permissions + plugin wiring
├── .claude/
│ ├── settings.json # permissions allow/deny + 3 PreToolUse guardrail hooks
│ └── commands/ # /verify-audit-trail and /verify-envelope slash commands
├── .opencode/
│ ├── plugins/course-eight-guardrails.js # the same 3 guardrails as an OpenCode plugin
│ └── commands/ # the two slash commands for OpenCode
├── mocks/
│ └── paperclip-mock.ts # the simulated-track Paperclip stand-in
├── course-seven-export/
│ ├── README.md # read-only input, do not edit
│ └── approvals.json # a sample of Maya's past approval decisions
└── docs/
├── course-eight-architecture.md # architectural background the rules file @-references
├── governance-ledger-schema.sql # the schema Decisions 5 and 11 build against
└── openclaw-skills-reference.md # OpenClaw skills system pointer notes
What each piece is for:
- `CLAUDE.md` / `AGENTS.md` (the project rules file). This is load-bearing context: your coding agent reads it at the start of every session, so it knows the stack, the two lab tracks, the critical rules (never edit `course-seven-export/`, never print the signing key, run `npm test` after Paperclip-side changes, the daemon-stop step in Decision 7 is mandatory), and where the on-demand reference docs live. Every Decision below assumes these rules are in place. `opencode.json` carries the same instruction-file wiring for OpenCode.
- The permissions block (`.claude/settings.json` allow/deny, `opencode.json` permission) plus the hooks/plugin. Together these are the deny-and-refuse safety layer. The permissions block says "ask before doing dangerous things." The hooks (`.claude/settings.json` PreToolUse) and the OpenCode plugin (`.opencode/plugins/course-eight-guardrails.js`) say "actually, just refuse these things, never even ask." Three guardrails are deterministic rather than agent-judgment: refuse to commit a `governance_ledger.json` carrying a production DB pointer, refuse edits to the read-only `course-seven-export/`, and refuse to commit a `.pem` private signing key. The combination is the right safety property: the most catastrophic possible mistake (leaking Maya's private signing key to a public repo) is structurally blocked.
- The slash commands (`.claude/commands/`, `.opencode/commands/`). `/verify-audit-trail` and `/verify-envelope` are reusable verification workflows saved once so you don't re-type the instructions. You'll run the audit-trail check after Decision 6 and again after Decision 7; you'll run the envelope check whenever you want to confirm the local envelope still matches what Paperclip has registered.
- `mocks/paperclip-mock.ts`. The simulated-track Paperclip stand-in. It ships with the read side working: a healthcheck, the pending-approval queue, a basic human-resolve path, a test helper that injects a workload, and the `governance_ledger.json` writer. The signed-delegation gating and the identity-registration endpoints are deliberately left as marked stubs, because building them is Decisions 4 and 5. Do not pre-fill them; the lab walks you through it.
- `course-seven-export/approvals.json`. Maya's past approval decisions: the read-only judgment input Decision 2 imports so Claudia starts with a model of Maya's judgment instead of learning from zero. The guardrails actively refuse edits here; modifying it would mean Claudia's seeded patterns no longer reflect a real history.
- `docs/`. Background the rules file `@`-references on demand: the architecture context, the governance-ledger schema Decisions 5 and 11 build against, and orientation notes on the OpenClaw skills system. The agent loads these only when a Decision needs them, not preemptively.
Your job: unzip the project, then in a terminal:
cd identic-ai-crash-course
git init
The git init is non-negotiable for OpenCode users (its /undo feature requires git). It's strongly recommended for Claude Code users too: commits are how you save progress between Decisions, and the daemon-stop fix in Decision 7 assumes the lab is git-tracked.
Then open the folder with your coding agent:
- Claude Code: `claude`
- OpenCode: `opencode`
Verifying setup
Before starting Decision 1, run a quick sanity check inside Claude Code or OpenCode to confirm the rules loaded.
In the Claude Code prompt, type:
What rules are you following for this session? List any instructions
from my project's CLAUDE.md file, and confirm the hooks block in
.claude/settings.json loaded.
You should see Claude Code list the lab rules from CLAUDE.md and confirm the three PreToolUse hooks are active. If it says it has no special instructions or the hooks aren't loaded, recheck that you opened claude from inside the identic-ai-crash-course/ directory.
The Plan-then-Execute discipline
Every Decision below has two phases: Plan (read-only investigation; produce a written plan; you review) and Execute (the tool implements after you approve the plan). This is non-negotiable for three reasons:
- A plan you review takes 30 seconds; reverting a wrong implementation takes much longer.
- Plans saved to `docs/plans/decision-N.md` survive `/clear` and can be resumed across sessions.
- The plan-then-execute split lets you save tokens (and money) by planning on a frontier model and executing on a cheap one (see the agentic-coding crash course's Plan/Execute composition pattern).
For each Decision: press Shift+Tab twice to enter Plan Mode (read-only). Brief the requirements. Review the plan Claude Code produces. Ask for the plan to be saved to docs/plans/decision-N.md. Then press Shift+Tab to exit Plan Mode and authorize execution.
The Decisions below describe the brief you give to the tool: what to plan and execute. They do not repeat the Plan-then-Execute workflow each time; that's now your standing operating procedure for the lab.
Decision 1: Install OpenClaw on Maya's Mac
In one line: install OpenClaw on a clean Mac, run `openclaw onboard`, walk the persona/model/chat-app setup, and verify the install works by messaging Claudia a "hello" through Telegram.
Everything downstream depends on a clean OpenClaw install. Lab failures in Decisions 2-7 trace back to a partially-completed Decision 1 about 80% of the time. The persona name you pick here is the name Claudia will use forever in your governance ledger; the model you pick affects Claudia's reasoning quality on novel cases; the chat-app integration is your only interface to her. Treat this as the foundation decision, not a setup step.
The brief. In your agentic coding tool, switch to plan mode, paste the brief below, ask the tool to produce a written plan and save it to docs/plans/decision-1.md, review it, then switch out of plan mode to execute.
I need to install OpenClaw on a fresh Mac and get through the onboarding flow. Per the official site at openclaw.ai and docs at docs.openclaw.ai/getting-started. Requirements:
- Install OpenClaw. Run `curl -fsSL https://openclaw.ai/install.sh | bash` (auto-installs Node.js and dependencies), or `npm i -g openclaw` if Node.js is already set up.
- Run `openclaw onboard --install-daemon` and walk through the prompts. The `--install-daemon` flag registers the gateway as a managed background service, so the `openclaw gateway stop` and `openclaw gateway start` controls used in Decision 7 actually have a service to act on.
- Persona configuration: name the OpenClaw `Claudia`.
- Model selection: pick `claude-opus-4-7` as the model.
- Chat-app integration: configure Telegram as the primary chat app. This requires creating a Telegram bot via `@BotFather` and pasting the bot token into the onboard flow.
- Verification round-trip: send Claudia a `hello` message in Telegram and confirm she responds with the configured persona greeting.
- Report back: the path to the OpenClaw config directory (typically `~/.openclaw/`), the Telegram bot username, and any onboard prompts that asked for choices beyond what's specified above (so I know what defaults got picked).
What to expect. Your assistant produces the install + onboard flow, verifies the Telegram round-trip, and reports back the configured state. The output should include:
- The path to the OpenClaw config directory on Maya's Mac (typically `~/.openclaw/`)
- The bot username Maya will use to reach Claudia in Telegram
- The model and persona configuration confirmed
- A successful round-trip: Maya messages "hello", Claudia responds in persona
Troubleshooting:
- Homebrew prompts for admin password on first run. Expected: required for system dependencies. Approve and continue.
- Telegram BotFather setup fails. Most common cause: Maya doesn't have a Telegram account. Create one first (free, 2-minute setup), then re-run `openclaw onboard`.
- Claudia responds but the persona is wrong. The onboard flow's persona configuration didn't save. Re-run onboarding with the config scope reset: `openclaw onboard --reset --reset-scope config`.
- Install hangs on "downloading dependencies." Network or DNS issue. Verify with `curl -fsSL https://openclaw.ai/health` and check Node.js install separately if that fails.
Bottom line of Decision 1. A working OpenClaw install where Maya can message Claudia in Telegram and get a persona-aware response. The config directory exists on disk; the bot is registered with Telegram; the model is set. Everything downstream wires onto this foundation.
When the integration misbehaves, check these three failure modes in this order; they account for roughly 80% of lab failures during OpenClaw + Paperclip integration.
- OpenClaw not installed cleanly or onboarding incomplete. Run `openclaw status` to verify; re-run `openclaw onboard` if anything looks off. The most common cause of mysterious failures in Decisions 2-7 is a partially-completed onboard from Decision 1.
- Paperclip not running, or not reachable from Maya's machine. OpenClaw running on Maya's Mac has to be able to reach Paperclip's API endpoint. If Paperclip is on a different machine or a private network, verify connectivity with `curl $PAPERCLIP_API_URL/api/health` before assuming the integration skill is broken.
- Identic AI's board API key not in place, or the delegation layer incomplete. In the full track, Claudia drives the real approval routes with a Paperclip board API key (Decision 4): the approval routes call `assertBoard()`, so a board-level credential is what they accept. If that board key is missing or wrong, Paperclip rejects her calls before your delegation layer ever runs. Separately, she is also registered as a Paperclip agent with an `agent_api_keys` entry, which gives her an identity and a revocation surface but is not what authenticates the approve call. Your own verification layer (Decision 5) checks the ed25519 attestation and the delegated envelope; if that layer is incomplete, Claudia's decisions never reach `POST /api/approvals/{id}/approve`.
If none of the three explains your failure, check the course-eight-issues GitHub label.
Decision 2: Onboard Maya's persona and import her judgment context
In one line: configure Claudia with Maya's persona, role context (she's the CEO of an AI-native customer-support company), and import any available history of her past approval decisions so Claudia has a starting model of Maya's judgment.
Concept 10 names three layers of context Claudia accumulates over time: standing instructions, per-decision feedback, derived patterns. Decision 2 seeds all three layers from the historical record. Without this seed, Claudia starts from zero and Maya has to spend her first month teaching Claudia from scratch through overrides. With this seed, Claudia starts month one with roughly 200 imported decisions worth of pattern-matching; Maya's first-month overrides refine an already-credible starting model.
Suppose Maya doesn't have a clean JSON export of her past approvals: she's been running for nine months on Paperclip but has never extracted her decision history. Three options for seeding Claudia: (a) skip the import; let Claudia learn from scratch over the next roughly 30 days, (b) export a partial history (the last 30 days, easy to compile from the Paperclip activity log), or (c) spend a Saturday writing the full export script. Predict which option Course Eight recommends, then read the briefing.
Answer: (b), partial history of the last 30 days. Option (a) wastes the first month; Maya does the work of teaching Claudia without the benefit of having Claudia. Option (c) is the right answer if Maya has the time, but realistically most owners won't. Option (b) is the pragmatic middle: 30 days of imported history gives Claudia a defensible starting model on the high-volume decision types (refunds, budget overrides) while leaving low-volume decisions (envelope-extension hires, terminations) explicit per the standing instructions in Decision 4. The course's briefing assumes (c) is achievable; if it isn't, fall back to (b).
The brief. In your agentic coding tool, switch to plan mode, paste the brief below, ask the tool to produce a written plan and save it to docs/plans/decision-2.md, review it, then switch out of plan mode to execute.
Claudia is installed; now configure her for Maya's specific role. Maya is the founder/CEO of an AI-native customer-support company built across Courses Five-Seven. The company has four Workers: Tier-1 Support, Tier-2 Specialist, Manager-Agent, and Legal Specialist, managed through Paperclip. Requirements:
- Build a persona document in OpenClaw's agent workspace (`~/.openclaw/workspace/`). OpenClaw reads `USER.md` in the workspace as durable context about its user, so write Maya's persona into `~/.openclaw/workspace/USER.md` (or a `context/` file referenced from it), capturing:
- Maya's role (founder/CEO of an AI-native customer-support company)
- The company's current Worker roster and what each Worker does
- Maya's communication style (terse, direct, comma-spliced)
- Maya's known policy thresholds from Courses Five-Seven (refund ceiling `$500`, monthly budget cap default `$1,000`, auto-approval policy from Course Seven Concept 9 active)
- Import Maya's past approval history. There's a JSON export at `./course-seven-export/approvals.json` containing historical approval decisions. OpenClaw's persistent memory is the workspace, not a separate import store, so place the file in the workspace (for example `~/.openclaw/workspace/context/approvals.json`) and reference it from `USER.md` or `MEMORY.md` so OpenClaw indexes it as durable context. See docs.openclaw.ai/concepts/session.
- Verification round-trip: through Telegram, ask Claudia "what do you know about how I approve refunds?" and confirm she returns a plausible answer drawn from the imported history (not generic AI-assistant patter; the response should reference specific patterns from the imported decisions).
- Report back: the on-disk locations of the persona and context files, and a transcript of the verification round-trip.
What to expect. Your assistant produces:
- Maya's persona written into `~/.openclaw/workspace/USER.md` (human-readable; Maya can review and edit)
- `approvals.json` placed in the workspace as a durable context file, referenced from `USER.md` or `MEMORY.md` so Claudia draws on it
- A verification round-trip: Maya messages "what do you know about how I approve refunds?" and Claudia responds with a summary like "Based on your past decisions, you approve about 94% of refunds under $500 without comment, about 70% in the $500 to $1,500 range with occasional surfacing, and surface nearly all over $1,500..." The exact phrasing varies; what matters is that Claudia draws on the imported data, not generic AI-assistant patter.
Troubleshooting:
- No `approvals.json` export available. Use the PRIMM-predicted fallback (b): write a smaller script to pull the last 30 days from Paperclip's `activity_log` table, filtering for `actor_type = 'user'` and `action` in (`approval.approved`, `approval.rejected`, `approval.revision_requested`); the per-decision detail is in the `details` jsonb column. (A query sketch follows this list.)
- Claudia's verification response is generic ("I don't know your approval history yet"). The context file is not in the workspace or is not referenced from `USER.md`/`MEMORY.md`. Confirm `approvals.json` is under `~/.openclaw/workspace/`, that a workspace file points at it, and that the JSON parsed (a common failure is a malformed timestamp field). Restart the gateway or start a fresh session so OpenClaw re-indexes the workspace.
- The persona file is wrong about something. Edit `USER.md` (or the `context/` file it references) directly. Claudia re-reads it on the next interaction.
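Here is a minimal sketch of that fallback query, assuming direct read access to Paperclip's Postgres and the `activity_log` columns named above; adapt names to your deployment before trusting the output.

```sql
-- Hedged sketch: last 30 days of Maya's own approval decisions.
-- Wrap in SELECT json_agg(t) FROM (...) t to emit a single JSON
-- document you can save as approvals.json.
SELECT
  entity_id  AS approval_id,
  action     AS decision,   -- approval.approved / .rejected / .revision_requested
  details,                  -- per-decision detail (jsonb)
  created_at
FROM activity_log
WHERE actor_type = 'user'
  AND action IN ('approval.approved', 'approval.rejected', 'approval.revision_requested')
  AND created_at >= now() - interval '30 days'
ORDER BY created_at;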
Bottom line of Decision 2. Claudia knows who Maya is, what the company looks like, and has a seeded model of Maya's past approval decisions. Layer 1 (standing instructions) will be set in Decision 4; Layer 2 (feedback) accumulates as Maya operates; Layer 3 (derived patterns) accumulates from the historical seed and forward.
Decision 3: Build the Paperclip-integration skill
In one line: write an OpenClaw skill that lets Claudia read the Paperclip approval queue, sign approval decisions with Maya's Identic AI key, and post them back to Paperclip's API.
This is the load-bearing skill: the one that turns Claudia from "a personal AI Maya can chat with" into "Maya's Identic AI configured for governance." The skill is the bridge between OpenClaw's session-and-skill architecture (Concepts 5-6) and Paperclip's approval API (Course Six). It's also the skill that carries most of your own integration code, both the OpenClaw skill plumbing and the signing-and-verification layer that wraps Paperclip's API. Spend time getting it right; Decisions 4-7 build on top of it.
The skill's three responsibilities, separated cleanly:
- Poll for pending approvals. Either every 60 seconds (cron-style) or on demand when Maya messages Claudia. The polling cadence is a tradeoff: faster means lower latency on routine approvals; slower means lower API load on Paperclip. 60 seconds is a defensible default.
- Reason about each approval. For each pending item, Claudia uses her persona (Decision 2) plus standing instructions (Decision 4) plus accumulated patterns (over time, Concept 10) to decide: approve, request revision, or surface to Maya.
- Sign and post the decision. When Claudia decides to approve, the skill signs the decision payload with Maya's Identic AI signing key (generated in Decision 4). Where the decision goes next is the one part that diverges by track; see the split below.
Simulated track: Responsibility 3 posts to the mock's single resolve route. The mock at `mocks/paperclip-mock.ts` keeps a simplified shape: one `POST .../approvals/{id}/resolve` route stands in for the whole decision flow. That is fine for a stand-in; you build the real-route logic only in the full track.
Full track: Responsibility 3 posts to the real Paperclip route. Paperclip 2026.513.0 has no resolve verb: the decision verbs are approve, reject, and request-revision, each its own POST sub-route. When Claudia decides to approve, the skill calls `POST /api/approvals/{approvalId}/approve` (with an optional `decisionNote`). These routes call `assertBoard()`: they accept a Paperclip board API key, not an agent key. So the credential the skill posts with is the board API key Maya minted for Claudia in Decision 4. (An agent key gets a 403 "Board access required" on these routes; that is why the delegation credential is board-scoped.) The ed25519 signature is your own attestation layer, not something Paperclip checks; your verification layer (Decision 5) verifies it and checks the delegated envelope before the skill ever calls the real route.
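A compressed sketch of the three responsibilities in one loop. Everything below is illustrative: `reasonAboutApproval`, `signDecision`, `postDecision`, and `notifyMaya` are hypothetical stand-ins for the Decision 3-5 code, the stub bodies exist only so the sketch runs, and the board-key env var name is an assumption.

```ts
// Hedged sketch of the skill's core loop: poll, reason, sign-and-post.
type Approval = { id: string };
type Decision = { action: "approve" | "request_revision" | "surface_to_owner"; note: string };

const API = process.env.PAPERCLIP_API_URL;
const COMPANY = process.env.PAPERCLIP_COMPANY_ID;
const DRY_RUN = process.argv.includes("--dry-run");

// Stand-ins for the real pieces (Decision 3's prompt, Decision 4's key, Decision 5's gates).
async function reasonAboutApproval(a: Approval): Promise<Decision> {
  return { action: "surface_to_owner", note: "stub: persona + Layer 1 + Layer 2 reasoning" };
}
async function signDecision(a: Approval, d: Decision): Promise<string> { return "ed25519:stub"; }
async function postDecision(id: string, d: Decision, sig: string): Promise<void> {}
async function notifyMaya(a: Approval, d: Decision): Promise<void> {}

async function pollOnce(): Promise<void> {
  const res = await fetch(`${API}/api/companies/${COMPANY}/approvals?status=pending`, {
    headers: { Authorization: `Bearer ${process.env.PAPERCLIP_BOARD_API_KEY}` },
  });
  const pending: Approval[] = await res.json();

  for (const approval of pending) {
    const decision = await reasonAboutApproval(approval);
    if (DRY_RUN) { console.log(`[dry-run] would ${decision.action} ${approval.id}`); continue; }
    if (decision.action === "surface_to_owner") { await notifyMaya(approval, decision); continue; }
    const signature = await signDecision(approval, decision); // attestation, not Paperclip auth
    await postDecision(approval.id, decision, signature);     // track-specific route, see above
  }
}

setInterval(pollOnce, 60_000); // polling_interval_seconds default
```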
The brief. In your agentic coding tool, switch to plan mode, paste the brief below, ask the tool to produce a written plan and save it to docs/plans/decision-3.md, review it, then switch out of plan mode to execute.
Build the load-bearing skill that lets Claudia talk to Paperclip. Installable via OpenClaw's skills system per docs.openclaw.ai/skills. Three responsibilities, separated cleanly. Requirements:
- Poll for pending approvals. Hit the pending-approvals queue every 60 seconds (configurable), or be invokable on demand when Maya messages Claudia. Full track: `GET /api/companies/{companyId}/approvals?status=pending`. Simulated track: the mock's equivalent queue route.
- Reason about each pending approval. For every item, ask Claudia (using OpenClaw's reasoning interface) whether to approve, request revision, or surface to Maya. Use Claudia's persona context, the standing instructions Maya has given (Layer 1), and the per-decision feedback patterns (Layer 2) from her session.
- Sign and post the decision. When Claudia decides to approve, sign the decision payload with Maya's Identic AI signing key (placeholder path for now; the real key gets generated in Decision 4). Then post: in the full track, `POST /api/approvals/{approvalId}/approve` with an optional `decisionNote` (the call is authenticated by Claudia's Paperclip board API key, minted in Decision 4; the approval routes require board access, not an agent key); in the simulated track, the mock's single `POST .../approvals/{id}/resolve` stub route. The signature is your own attestation layer; the full-track verification of it lands in Decision 5.
- Skill metadata. Name the skill `paperclip-governance-delegate`. The configuration file should expose these parameters:
  - `PAPERCLIP_API_URL`
  - `PAPERCLIP_COMPANY_ID`
  - `polling_interval_seconds` (default `60`)
  - `signing_key_path` (default `~/.openclaw/keys/identic-ai.pem`)
- Installation: a locally-authored skill lives in the workspace skills directory, at `~/.openclaw/workspace/skills/paperclip-governance-delegate/SKILL.md` (YAML frontmatter plus a markdown body; the folder name must match the skill `name`). OpenClaw discovers it on the next fresh session or after `openclaw gateway restart`. (`openclaw skills install` is for pulling skills from ClawHub by name, not for local skills.) Confirm with `openclaw skills list`.
- Dry-run mode (critical). Include a `--dry-run` flag that polls and reasons against Paperclip's real approval queue but only logs what Claudia would do, without actually posting decisions. Maya uses this for the first week as a confidence-building period before going live.
- Report back: the skill directory structure, the manifest path, confirmation that `openclaw skills list` shows the skill, and a sample dry-run log.
What to expect. Your assistant produces:
- A skill directory `./paperclip-governance-delegate/` containing the skill manifest, the polling loop, the per-approval reasoning prompt template, and the signing-and-posting logic (with a placeholder where Decision 4's real key will plug in); a manifest sketch follows this list
- A configuration file (for example, `config.yaml`) with the exposed parameters
- The `--dry-run` mode that logs to stdout/file but doesn't post: critical for the first-week confidence period
- The skill installed locally and visible in `openclaw skills list`
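For orientation, a minimal sketch of what the manifest might look like. The frontmatter fields here are illustrative, not OpenClaw's documented schema; check docs.openclaw.ai/skills for the real contract before copying.

```markdown
---
name: paperclip-governance-delegate
description: >
  Reads Paperclip's pending-approval queue, reasons about each item with
  Maya's persona and standing instructions, and signs-and-posts decisions
  inside her delegated envelope. Honors --dry-run.
---

# paperclip-governance-delegate

Poll `GET /api/companies/{companyId}/approvals?status=pending` every
`polling_interval_seconds` (default 60). For each pending item, decide:
approve, request revision, or surface to Maya. Sign approvals with the
key at `signing_key_path` before posting. In `--dry-run`, log the
intended decision and post nothing.
```

The `config.yaml` next to it would mirror the four parameters from the brief (`PAPERCLIP_API_URL`, `PAPERCLIP_COMPANY_ID`, `polling_interval_seconds`, `signing_key_path`).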
Troubleshooting:
- Skill installs but doesn't run. OpenClaw skills require a `SKILL.md` manifest (YAML frontmatter plus markdown body) at the skill root; check `~/.openclaw/workspace/skills/paperclip-governance-delegate/SKILL.md`, confirm the `name` in the frontmatter matches the folder name, and restart the gateway so OpenClaw re-indexes the workspace.
- Polling succeeds but returns no approvals when there should be pending ones. Verify the `PAPERCLIP_COMPANY_ID` matches the company you set up in Course Seven; verify the API URL is reachable from Claudia's machine.
- Claudia's reasoning prompt produces inconsistent results across similar approvals. The reasoning template needs more structure; add explicit checks for the standing instructions before invoking the persona-driven judgment.
Bottom line of Decision 3. The skill is installed, configured, and operational in dry-run mode. Claudia can read the approval queue and produce reasoned decisions for each item; she can't post them yet because the signing key doesn't exist. Decision 4 makes posting real.
Decision 4: Generate Maya's Identic AI signing key and configure the delegated envelope
In one line: generate the cryptographic key pair for Claudia, mint the Paperclip board API key she will drive the approval routes with, register her as a Paperclip agent for identity and revocation, and configure Claudia's delegated envelope (the thresholds she's allowed to act on autonomously vs surface to Maya).
Two architectural primitives ship in this Decision: the cryptographic identity (Concept 8) and the delegated envelope (Concept 9). Both are set once and revisited periodically; both shape every downstream decision Claudia makes. The delegated envelope is especially sensitive: too wide and Claudia takes actions Maya would have wanted to see; too narrow and the Invariant 2 scaling property doesn't kick in.
Maya is configuring Claudia's delegated envelope for the first time. Her instinct is to start conservatively: low refund ceiling, surface nearly everything. A colleague (her co-founder) argues the opposite: start wide and let Maya's overrides narrow Claudia down over time. Predict which approach Course Eight recommends, and what the operational consequence is for the first month.
Answer: Course Eight recommends starting conservatively, but with a deliberate widening plan. Reasons: (a) overrides are expensive to Maya: each override is a context-switch and a feedback message; if Claudia's envelope is too wide, the first month is a flood of overrides that defeats the purpose. (b) Conservative starts let Maya observe Claudia's reasoning before trusting more decisions to her. (c) Widening is easier than tightening operationally: Maya adds a Layer 1 instruction "you can now auto-approve refunds up to $X" and Claudia adjusts immediately; tightening requires more nuanced standing-instruction edits. Operational consequence for the first month: surface rate around 20 to 25% (high but bounded), override rate around 3 to 5%, with both numbers dropping by month 2 as Maya widens the envelope deliberately. The trajectory in Concept 10's six-month walkthrough assumes this conservative start.
The brief. In your agentic coding tool, switch to plan mode, paste the brief below, ask the tool to produce a written plan and save it to docs/plans/decision-4.md, review it, then switch out of plan mode to execute.
Generate Claudia's cryptographic identity and configure her delegated envelope. Two architectural primitives ship in this Decision: the signing key (Concept 8) and the delegated envelope (Concept 9). Requirements:
Generate a fresh ed25519 key pair. Store the private key at `~/.openclaw/keys/identic-ai.pem` with strict permissions (chmod 600). Store the public key separately; your own verification layer (Decision 5) uses it to verify Claudia's attestations. (A key-generation sketch follows the brief.)
Optional hardware-backed storage (macOS only). If Maya is on macOS and wants hardware-backed protection, use the Security framework to store the private key in the Keychain instead of on the filesystem; the path becomes a Keychain reference.
Mint the board API key, then register Claudia as an agent. Full track, two distinct credentials:
- The board API key is the delegation credential. Paperclip has a real `board_api_keys` table, and the approval routes call `assertBoard()`, so a board-level credential is what they accept. Minting one is not a single CLI verb: it is Paperclip's CLI auth-challenge flow. `POST /api/cli-auth/challenges` with `{"requestedAccess": "board"}` returns a `boardApiToken` (a `pcp_board_...` value) plus an `approvalUrl`; Maya opens that URL in the Paperclip dashboard to approve the challenge, and the token is then live. (On a `local_trusted` dev deployment the token is issued immediately; on a real deployment the dashboard-approval step is the human gate.) That `boardApiToken` is what Claudia holds and posts with. Conceptually this board key is the delegation: Maya hands her Identic AI a board-level credential, deliberately scoped down by your own delegation layer (Decision 5), not by Paperclip.
- The agent registration is identity plus a revocation surface. Register Claudia as a real Paperclip agent (a row in `agents`) and issue her an `agent_api_keys` entry. This gives her a Paperclip identity and something Maya can revoke in Decision 7. It is not what authenticates the approve call. Real constraints to respect: `agents.role` is a server-validated enum with no `identic_ai` value, use `general`; `adapterType` is enum-validated (`claude_local`, `acpx_local`, `codex_local`, ...), not `claude_code`; and there is no `paperclipai agent create` CLI verb, agent creation is API-only via `POST /api/companies/{companyId}/agents` (the `paperclipai agent local-cli <ref>` verb only mints a key for an agent that already exists).
- Record the delegated envelope as a `principal_permission_grants` row whose `scope` jsonb holds the envelope thresholds. Paperclip stores this; it does not enforce it on the approve route (your Decision 5 layer does).
Simulated track: call the mock's stub registration endpoint, which returns a principal id and stores the envelope for you. Either way, the public key is registered alongside so your verification layer can check Claudia's signatures.
Conservative initial delegated envelope. Maya can widen this later via standing instructions. Encode these thresholds:
- Refunds: auto-approve up to $2,000; surface anything over
- Hires within existing envelope: auto-approve
- Hires extending the company envelope: always surface to Maya
- Terminations: always surface to Maya
- Budget overrides: auto-approve up to 20% over monthly cap; surface anything over
- CMA migrations: surface for the first quarter; auto-approve later only for Workers under $1,000/mo budget
- Standing-policy edits: always surface to Maya
Maya confirmation step (required before submitting). Send Maya a Telegram message via Claudia: "Here's the proposed delegated envelope: [JSON summary]. Reply CONFIRM to register, or send edits." Only submit the registration on Maya's CONFIRM response.
Persist the credentials and the ids. On successful registration, store in Claudia's session: the board API key she posts approval decisions with, her registered agent id (full track) or the mock's principal id (simulated track), and the `principal_permission_grants` id for the envelope. The Decision 3 skill uses the board key when posting; Decision 7 uses the agent id and `agent_api_keys` id when revoking.
Write the envelope to disk. Save the final envelope JSON to `~/.openclaw/governance/delegated-envelope.json` so Maya can directly edit it later when widening or narrowing Claudia's authority.
Report back: the key fingerprint (not the private key itself), the Keychain reference if used, the board API key id, the registered agent id and `agent_api_keys` id, and the transcript of Maya's CONFIRM round-trip.
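A minimal sketch of the key-generation step, using Node's built-in `crypto` module (one of the library suggestions in Decision 5's brief). Paths match the brief's defaults; the public-key filename is an assumption.

```ts
// Hedged sketch: generate Claudia's ed25519 pair and write it with strict perms.
import { generateKeyPairSync } from "node:crypto";
import { mkdirSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const keyDir = join(homedir(), ".openclaw", "keys");
mkdirSync(keyDir, { recursive: true, mode: 0o700 });

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Private key: PKCS8 PEM at the brief's signing_key_path default, mode 600.
writeFileSync(
  join(keyDir, "identic-ai.pem"),
  privateKey.export({ type: "pkcs8", format: "pem" }),
  { mode: 0o600 },
);

// Public key (filename is illustrative): Decision 5's verification layer reads this.
writeFileSync(
  join(keyDir, "identic-ai.pub.pem"),
  publicKey.export({ type: "spki", format: "pem" }),
  { mode: 0o644 },
);
```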
What to expect. Your assistant produces:
- A fresh ed25519 key pair, private key at the configured path with 600 permissions (or in the Keychain if Maya chose that option)
- Optional macOS Keychain integration if Maya is on Mac and chose hardware-backed storage
- A Telegram round-trip showing Maya the proposed envelope and getting her CONFIRM
- Full track: a minted Paperclip board API key (the delegation credential), Claudia registered as a Paperclip agent with an `agent_api_keys` entry (identity plus revocation surface), and the delegated envelope stored as a `principal_permission_grants` row. Simulated track: the mock's stub registration.
- The envelope persisted to `~/.openclaw/governance/delegated-envelope.json` for direct editing (a JSON sketch follows this list)
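One plausible shape for that persisted envelope file, serializing the brief's conservative thresholds. The field names are illustrative, not a schema the course mandates; the `principal_permission_grants.scope` jsonb would mirror the same values.

```json
{
  "refunds": { "autoApproveUpToCents": 200000 },
  "hires": { "withinEnvelope": "auto_approve", "extendingEnvelope": "surface" },
  "terminations": "surface",
  "budgetOverrides": { "autoApprovePctOverCapMax": 20 },
  "cmaMigrations": {
    "surfaceUntil": "first-quarter-end",
    "thenAutoApproveForWorkersUnderMonthlyBudgetCents": 100000
  },
  "standingPolicyEdits": "surface"
}
```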
Troubleshooting:
- No identity-registration endpoint to call. Full track: there is no custom Paperclip endpoint to invent here. You mint the board API key (a `board_api_keys` row), register Claudia with Paperclip's real agent primitive (`POST /api/companies/{companyId}/agents`, then issue an `agent_api_keys` entry), and record the delegated envelope as a `principal_permission_grants` row. There is no `paperclipai agent create` CLI verb; agent creation is API-only. Decision 5 builds your own verification layer that reads those, not a new Paperclip route. Simulated track: the mock ships a stub registration endpoint; if it returns 404, that stub is what you implement (it is part of the simulated-track work, not the full track's).
- Agent registration rejected on `role` or `adapterType`. Both are server-validated enums. `role` has no `identic_ai` value, use `general`. `adapterType` is one of `claude_local`, `acpx_local`, `codex_local`, and similar, not `claude_code`. Claudia's "Identic AI" character lives in your `governance_ledger`, not in Paperclip's agent enums.
- Maya doesn't CONFIRM; wants to edit the envelope. Expected and fine. The conservative defaults are starting points; Maya may have specific values from her operations. Re-send the proposed envelope with her edits and re-request CONFIRM.
- Key permissions error (chmod 600 fails). Filesystem-level issue; check that the `~/.openclaw/keys/` directory is owned by Maya's user.
Bottom line of Decision 4. Maya's Identic AI now holds a Paperclip board API key (the credential she drives the approval routes with) and is registered as a Paperclip agent (her identity and revocation surface), carrying a configured delegated envelope recorded at Paperclip but enforced by your own Decision 5 layer. The conservative initial envelope means the first month will feel "surface-heavy": that's the point. Widening happens deliberately via standing-instruction edits, not by initial over-permissioning.
Decision 5: Build the delegation-and-verification layer
In one line: build the layer that makes Claudia's credentials meaningful, the gates that verify her ed25519 attestation and check the delegated envelope before any decision reaches Paperclip, plus your own `governance_ledger` table. In the full track this layer lives in your integration skill and wraps Paperclip's real approval routes; in the simulated track it lives in the mock.
Decisions 1-4 set up Claudia and gave her credentials; Decision 5 makes those credentials meaningful. This is where the trust-delegation primitive (Concept 7), the signed-delegation verification (Concept 8), and the two-envelope intersection (Concept 9) all become live code. Without Decision 5, Claudia can sign all the decisions she wants: nothing verifies the signature or checks the envelope, so a signed decision is just an unchecked API call.
Before you build the verification layer, predict the answer to the single most counterintuitive fact about it. In Decision 4 you minted Claudia two Paperclip credentials: a board API key and an agent API key. One of them is the credential she actually uses to drive the real approval routes (POST /api/approvals/{id}/approve and friends). The other is just her identity and a revocation surface.
Predict: which one drives the approval routes, the board API key or the agent API key? And why would it be that one, given that Claudia is an agent, not a board member?
Answer: the board API key. This is the counterintuitive part. Paperclip's approval routes (approve, reject, request-revision) all call assertBoard(): they accept a board-level credential and nothing else. An agent API key, even Claudia's own, gets a 403 "Board access required" on these routes. So the credential that lets Claudia drive an approval decision has to be board-scoped. That is exactly why Decision 4 had Maya mint a board API key for Claudia: the board key, deliberately scoped down by your own delegation layer, is the delegation. The agent registration gives Claudia a Paperclip identity and something Maya can revoke in Decision 7, but it is not what authenticates the approve call. If you build the verification layer expecting an agent key to work on the approval routes, every one of Claudia's decisions will 403 before your gates ever run. (Simulated track: the mock's single /resolve route does not enforce this distinction; it is a full-track reality. The simulated track still teaches it so your mental model is correct if you later go full.)
A concrete anchor before the rules. The clearest single explanation of how this whole layer behaves is one routine refund walked end-to-end, through Claudia's reasoning, the three gates, the real Paperclip call, and both audit rows. Read this trace first; then the three gates and the route reference below read as a lookup, not as something to absorb cold. The trace is shown in its full-implementation-track shape: real Paperclip routes, real activity_log columns. The entries are illustrative: synthesized to show the shape of the data, not exported from a running implementation. (In the simulated track the steps are the same in spirit, but the mock's single /resolve route and simplified row shapes stand in for the real Paperclip surface.)

A routine refund moves from pending to a recorded decision in about 40 seconds, fully autonomous and fully audited, with two joinable audit rows and zero human interruption.
1. The approval arrives at Paperclip. A refund over the envelope ceiling is modeled as a request_board_approval approval, with the refund detail inside the payload jsonb (Paperclip's approval type enum is hire_agent, approve_ceo_strategy, budget_override_required, request_board_approval; there is no refund type). Note there is no top-level issueIds column on approvals: issue links live in the separate issue_approvals join table, and details.linkedIssueIds is populated by the service when the row is written.
```json
{
  "id": "apr_01HZ4Q...",
  "companyId": "co_maya...",
  "type": "request_board_approval",
  "status": "pending",
  "requestedByAgentId": "worker_tier1_support",
  "payload": {
    "kind": "refund",
    "amountCents": 89000,
    "currency": "USD",
    "customerId": "C-3421",
    "accountAgeDays": 1167,
    "priorRefunds6mo": 0,
    "reason": "billing-error-duplicate-charge"
  },
  "createdAt": "2026-05-12T09:14:03.117Z"
}
```
2. Claudia's polling skill picks it up (from Decision 3; polling cadence is about 60s):
```
[2026-05-12T09:14:42] claudia.skill.paperclip-governance-delegate:
  poll GET /api/companies/co_maya/approvals?status=pending → found 1 pending (apr_01HZ4Q...)
  routing to per-approval reasoning prompt
```
3. Claudia reasons (from the per-approval reasoning prompt template, Decision 3):
```
[2026-05-12T09:14:42] claudia.reasoning:
  input: approval=apr_01HZ4Q..., payload.kind=refund, amount=$890, customer_age=3.2yr, prior_refunds=0
  layer-1 check (standing instructions):
    - "auto-approve refunds under $2,000 with no prior refunds and account >2yr" → MATCH
  layer-3 pattern (derived):
    - "Maya tends to approve fast in this band; ~91% historical match" → REINFORCES
  decision: approve
  confidence: 0.91
  reasoning_summary: "Long-tenure customer (3.2yr), no prior refunds, amount within
    delegated envelope ($2,000 ceiling), pattern reinforces standing rule."
```
4. Claudia's delegation layer signs and gates the decision (Decision 5's three gates, run before any call to Paperclip):
```
[2026-05-12T09:14:43] delegation-layer.gate-check:
  signed decision payload with ed25519 (signature ed25519:k7n2m...8q3w)
  gate 1: signer's credentials recognized and unrevoked (board API key bk_claudia_maya;
    agent ag_claudia_maya registered) → PASS
  gate 2: ed25519 signature verifies against Claudia's registered public key → PASS
  gate 3: delegated envelope (principal_permission_grants.scope):
    payload.kind=refund, amount=$890, ceiling=$2,000 → IN ENVELOPE → PASS
  all gates passed → calling the real Paperclip route
```
5. The delegation layer calls the real Paperclip approve route (POST /api/approvals/{approvalId}/approve, authenticated by Claudia's board API key, because the approve route requires board access):
```
[2026-05-12T09:14:43] delegation-layer.post:
  POST /api/approvals/apr_01HZ4Q.../approve
  auth: Claudia's board API key
  body: { "decisionNote": "auto-approved by Owner Identic AI; attestation ed25519:k7n2m...8q3w" }
  → 200 OK (the approval's status is now 'approved'; this is a recorded decision,
    it does not move issue iss_3421 or resume a Worker)
```
6. Paperclip writes the activity_log row (real columns; the approve route is a board action, so the row is actor_type='user', not actor_type='agent'):
```json
{
  "id": "act_8z2k...",
  "company_id": "co_maya...",
  "actor_type": "user",
  "actor_id": "local-board",
  "agent_id": null,
  "action": "approval.approved",
  "entity_type": "approval",
  "entity_id": "apr_01HZ4Q...",
  "details": {
    "type": "request_board_approval",
    "linkedIssueIds": ["iss_3421"],
    "decisionNote": "auto-approved by Owner Identic AI; attestation ed25519:k7n2m...8q3w"
  },
  "created_at": "2026-05-12T09:14:43.298Z",
  "run_id": null
}
```
This row looks exactly like one Maya would write resolving the approval herself. Paperclip cannot tell them apart; the next row, in your own governance_ledger, is what carries the distinction.
7. Your delegation layer writes Claudia's parallel `governance_ledger` row (your own additive table, schema in docs/governance-ledger-schema.sql):
```json
{
  "ledger_id": "gov_2k8z...",
  "approval_id": "apr_01HZ4Q...",
  "principal": "owner_identic_ai",
  "acting_on_behalf_of": "owner_human_maya",
  "signer_agent_id": "ag_claudia_maya",
  "action_taken": "approve",
  "confidence": 0.91,
  "layer_source": "standing_instruction",
  "layer_reference": "si_refund_under_2000_long_tenure_no_priors",
  "reasoning_summary": "Long-tenure customer (3.2yr), no prior refunds, amount within delegated envelope, pattern reinforces standing rule.",
  "attestation": "ed25519:k7n2m...8q3w",
  "override_status": null,
  "timestamp": "2026-05-12T09:14:43.298Z"
}
```
Total elapsed: about 40 seconds from pending approval to recorded decision (mostly Claudia's reasoning latency plus signature crypto). Maya was not interrupted. The two rows, Paperclip's activity_log row (a board action, actor_type='user') plus your governance_ledger row with principal='owner_identic_ai', make the decision recoverable later: Maya can SQL-query both tables (joined on the approval id) to reconstruct exactly what Claudia did and why. Paperclip's row alone cannot tell her whether she or Claudia resolved it; the governance_ledger row is what answers that. Note what step 5 did not do: it recorded a decision; it did not move issue iss_3421 or resume a Worker. Driving that issue forward is a separate explicit step (PATCH /api/issues/iss_3421 to set its status, or assigning a Worker to pick it up), exactly as Course Six and Seven taught.
That is the whole layer in one trace. Steps 4, 5, and 7 are exactly what Decision 5 builds: the three gates, the real Paperclip call, and the governance_ledger write. The rest of Decision 5 is the rules behind that trace: first the three gates in detail, then the route reference as a lookup table you scan when you need it, not something to read linearly.
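If you want the ledger's shape in front of you before the brief, here is a plausible reconstruction of docs/governance-ledger-schema.sql from the fields the trace's row carries (plus the override fields Decision 6 writes). The shipped schema file is authoritative; the column types here are guesses.

```sql
-- Plausible reconstruction, for orientation only; defer to
-- docs/governance-ledger-schema.sql in the starter zip.
CREATE TABLE governance_ledger (
  ledger_id           text PRIMARY KEY,
  approval_id         text NOT NULL,      -- joins to activity_log.entity_id
  principal           text NOT NULL,      -- e.g. 'owner_identic_ai'
  acting_on_behalf_of text NOT NULL,      -- e.g. 'owner_human_maya'
  signer_agent_id     text,
  action_taken        text NOT NULL,      -- approve | reject | request_revision | surface_to_owner | refused
  confidence          numeric,
  layer_source        text,               -- standing_instruction | feedback | derived_pattern
  layer_reference     text,
  reasoning_summary   text,
  attestation         text,               -- ed25519:<signature>
  override_status     text,               -- e.g. 'overridden_by_owner'
  override_reason     text,
  "timestamp"         timestamptz NOT NULL DEFAULT now()
);

CREATE INDEX governance_ledger_approval_idx ON governance_ledger (approval_id);
```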
The two tracks diverge sharply here. Read the block for your track.
Simulated track. You implement the gating and the identity endpoints in the local mock at `mocks/paperclip-mock.ts`, where the signed-delegation gating and the stub registration endpoint are already marked as stubs for you to fill. The mock keeps a simplified shape: a single `POST .../approvals/{id}/resolve` route stands in for the whole decision flow, and the mock itself runs the three gates before resolving. This is the working content the simulated track was built and tested against; build it exactly as the mock's stubs are laid out.
The brief below, in its simulated form: modify the mock's `/resolve` stub to accept a signature and `signer_principal_id`, run the three gates (signer registered, signature verifies, action within the stored envelope), and on success write both a mock `activity_log` entry and a `governance_ledger.json` row. Implement the mock's stub registration and revocation endpoints. Then jump to the shared "what to expect" notes below.
Full track. The worked trace above showed the layer running. This block is the rules behind it, in three parts: (A) what you build, (B) the three gates in detail, (C) the route reference as a lookup table. Read A and B; skim C and return to it when you wire the actual calls.
Part A: what you build (and what you don't). There are no new Paperclip endpoints to build. Paperclip 2026.513.0 already owns the approval routes, the board_api_keys table, the agents registry, agent_api_keys, principal_permission_grants, and activity_log. What you build in Decision 5 is your own delegation layer inside the paperclip-governance-delegate skill, plus your own additive governance_ledger table. Paperclip never verifies a signature and never enforces the delegated envelope on the approve route; your layer does both, and only then calls the real Paperclip route with Claudia's board API key.
Part B: the three gates, in order, for each decision Claudia reaches. This is steps 4 and 5 of the worked trace, in detail.
- Gate 1, signer is a recognized principal. Confirm Claudia's credentials are still good: her board API key (the one she posts with) is present and not revoked, and her registered `agents` row plus `agent_api_keys` entry still exist (Decision 7 revokes here). A revoked board key gets a 403 on the approval routes; a revoked agent key fails the agent-only routes. This gate fails fast and is auditable in your `governance_ledger`.
- Gate 2, signature verification. Verify Claudia's ed25519 signature over the canonical-JSON decision payload against her registered public key. This is your own attestation layer; it is the thing that proves the decision came from Claudia's key and was not tampered with. Paperclip has no signature field, so this gate exists only in your layer.
- Gate 3, delegated envelope. Read the delegated envelope from Claudia's `principal_permission_grants.scope` jsonb and test that the claimed action (type, amount) is inside it. Paperclip stores this row but does not enforce it on the approve route, so this gate is your layer's discipline, not a Paperclip-enforced boundary. The enforced authority is the intersection of the owner's envelope and this delegated subset (Concept 9).
Then: on all three gates passing, call the real Paperclip route (Part C), authenticated by Claudia's board API key, with an optional decisionNote (you may echo the ed25519 signature there for audit). And write a governance_ledger row (your own table, schema in docs/governance-ledger-schema.sql), joinable to Paperclip's activity_log by the approval id. That row carries the owner-human vs owner-identic-ai distinction; Paperclip's activity_log does not (see the honesty correction below).
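A compact sketch of the three gates in order. Everything in it is illustrative: the credential record, the envelope shape, and the flat-payload canonicalizer are stand-ins for your own Decision 4/5 wiring, not Paperclip APIs.

```ts
// Hedged sketch of gate order; refusals are recorded, never silently dropped.
import { generateKeyPairSync, sign, verify } from "node:crypto";

type Payload = { kind: string; amountCents: number };
type Decision = { approvalId: string; payload: Payload; signature: Buffer };

// Stand-ins for Decision 4's registered credentials and envelope
// (board_api_keys + agents + principal_permission_grants in the full track).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const creds = { revoked: false, publicKey, envelope: { refundCeilingCents: 200_000 } };

// Flat-payload canonicalizer; nested payloads need the recursive
// sorted-keys version from the brief's library notes.
const canonical = (p: Payload) => Buffer.from(JSON.stringify(p, Object.keys(p).sort()));

function gate(d: Decision): "pass" | `refused:${string}` {
  // Gate 1: signer is a recognized, unrevoked principal.
  if (creds.revoked) return "refused:signer_not_recognized";
  // Gate 2: the ed25519 attestation verifies against the registered public key.
  if (!verify(null, canonical(d.payload), creds.publicKey, d.signature))
    return "refused:signature_invalid";
  // Gate 3: the claimed action is inside the delegated envelope.
  if (d.payload.kind === "refund" && d.payload.amountCents > creds.envelope.refundCeilingCents)
    return "refused:outside_envelope";
  // All three passed: the caller posts to the real route with the board
  // API key and writes the governance_ledger row.
  return "pass";
}

// The $890 refund from the trace passes; a $2,500 one would fail gate 3.
const payload = { kind: "refund", amountCents: 89_000 };
const d: Decision = { approvalId: "apr_01HZ4Q...", payload, signature: sign(null, canonical(payload), privateKey) };
console.log(gate(d)); // "pass"
```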
Part C: the real Paperclip routes your layer calls (verified against 2026.513.0; a lookup table, scan it when you wire the calls, you do not need to memorize it):
- `POST /api/approvals/{approvalId}/approve` (200 for a board caller), body `{decisionNote?}`. The body may also carry a `decidedByUserId`, but Paperclip ignores it: the route attributes the decision to the authenticated actor, not to that field. Do not treat it as a meaningful input.
- `POST /api/approvals/{approvalId}/reject` (200), `POST /api/approvals/{approvalId}/request-revision` (200). All three call `assertBoard()`: an agent-API-key caller gets a 403 "Board access required".
- `GET /api/companies/{companyId}/approvals?status=pending` (200) for the poll, `GET /api/approvals/{approvalId}` (200) to read one
- `GET /api/companies/{companyId}/activity` (200) to read back the audit trail
Note the action routes are flat (/api/approvals/{id}/approve), not company-scoped. There is no resolve verb and no PATCH /api/approvals/{id}: the decision verbs are approve, reject, and request-revision, each a POST sub-route.
Be honest about what Paperclip's activity_log records for approvals. This is the load-bearing correction. The approval routes are board actions: every activity_log row they write is actor_type='user', actor_id='local-board' (the board user id), agent_id=null. An approval Claudia resolves and one Maya resolves herself both land as actor_type='user'; Paperclip cannot tell them apart, because to Paperclip they are both board callers. actor_type='agent' is real in activity_log, but only for issue, heartbeat, and run activity, never for an approval decision. So the two-principal distinction is not something you inherit from Paperclip for approvals. It lives in your governance_ledger: the row Claudia writes carries principal='owner_identic_ai' with her attestation and reasoning, and that is what tells a Claudia-resolved approval apart from a Maya-resolved one. This is exactly why the course builds the governance_ledger: it is the audit truth Paperclip does not natively provide for approvals. The architecture is working as designed, not falling short.
Approving is a decision record, not a state machine. This is the single most important correction to the earlier draft. Approving an approval in Paperclip records the decision. It does not change the linked issue's status, does not execute the payload action, and does not resume any Worker. There is no step.wait_for_event Worker-resume wired to Paperclip approvals (that primitive belongs to Inngest, Course Five). Continuing the work after an approval is a separate, explicit step: PATCH /api/issues/{id} to move the issue's status, or assign a Worker so its next heartbeat picks the issue up, or enqueue an agent_wakeup_request. Your delegation layer's job ends at "the decision is recorded and the governance_ledger row is written"; driving the follow-on work is its own step, exactly as Course Six and Seven taught.
The brief. In your agentic coding tool, switch to plan mode, paste the brief below, ask the tool to produce a written plan and save it to docs/plans/decision-5.md, review it, then switch out of plan mode to execute.
Build the delegation-and-verification layer. Full track: this layer lives in the `paperclip-governance-delegate` skill and wraps Paperclip's real approval routes; you add no Paperclip endpoints. Simulated track: implement the equivalent gating and stub endpoints in `mocks/paperclip-mock.ts`. Requirements:
- Three gates, run before any decision is posted. For each decision Claudia reaches:
  - Gate 1, signer is a recognized principal. Full track: confirm Claudia's board API key (the one she posts with) is present and not revoked, and her `agents` row plus `agent_api_keys` entry still exist. Simulated track: look up `signer_principal_id` in the mock's registration store. If not recognized or revoked, refuse and write a `governance_ledger` row recording the refused attempt (forgery and stale-key attempts must be auditable).
  - Gate 2, signature verification. Verify Claudia's ed25519 signature over the canonical-JSON decision payload against her registered public key. If invalid, refuse and record it. Paperclip has no signature field; this gate lives only in your layer.
  - Gate 3, delegated envelope. Full track: read the envelope from Claudia's `principal_permission_grants.scope` jsonb. Simulated track: read the envelope the mock stored at registration. Test the claimed action (type, amount) against it; if outside, refuse and record it. Paperclip does not enforce this on the approve route; your layer does.
- On all three gates passing, post the decision. Full track: call the real Paperclip route, `POST /api/approvals/{approvalId}/approve` (or `/reject`, or `/request-revision`), authenticated by Claudia's board API key (the approval routes require board access), with an optional `decisionNote` (you may echo the signature there for audit). Simulated track: resolve through the mock's single `/resolve` stub route.
- Be honest about Paperclip's attribution; do not fake it. Do not invent an `actor` field on Paperclip's row. In the full track, the approve route is a board action: Paperclip writes the `activity_log` row itself with `actor_type='user'`, `actor_id='local-board'` (the board user id), `agent_id=null`, `entity_type='approval'`, `entity_id=<approval id>`, `action='approval.approved'`. An approval Claudia resolves and one Maya resolves herself look identical in `activity_log`. The owner-human vs owner-identic-ai distinction is carried by your `governance_ledger` row, not by Paperclip. In the simulated track, the mock implements native principal attribution for teaching clarity; do not expect the real Paperclip to.
- Do NOT resume a Worker. Approving is a decision record. It does not change the linked issue or resume anything. Continuing the work is a separate explicit step (`PATCH /api/issues/{id}` to set status, or assign a Worker, or enqueue an `agent_wakeup_request`); that step is out of scope for this layer.
- Create the `governance_ledger` table with the schema in `docs/governance-ledger-schema.sql`. This is your own additive table; it does not touch Paperclip's schema. Write one row per decision Claudia makes (posted or refused), joinable to `activity_log` by the approval id.
- Revocation, for Decision 7. Full track: revoking Claudia's credentials is a board-credentialed operation. `DELETE /api/agents/{id}/keys/{keyId}` is the real route for revoking an `agent_api_keys` entry (it sets `revoked_at`); revoking the board API key Claudia holds is the parallel move. Both require the owner-human's board credentials; there is no endpoint for you to build. Simulated track: implement the mock's stub revocation endpoint, and make it require the mock's owner-human auth, never an Identic-AI signature.
- Library suggestions: for ed25519 in TypeScript use `@noble/ed25519`; in Node, the built-in `crypto` module; canonical JSON for signing should sort keys and strip whitespace so signer and verifier byte-match. (A round-trip sketch appears after the "What to expect" list below.)
- Report back: the gate implementation, a successful test of each gate (including the refusal paths), the real Paperclip route call (full track) or the mock resolve (simulated track), and a sample `governance_ledger` row.
What to expect. Your assistant produces:
- A three-gate verification layer: signer is a recognized principal, signature verifies, action within the delegated envelope
- Signature verification using a standard ed25519 library (in TypeScript, `@noble/ed25519` is the typical choice; in Node, the built-in `crypto` module works)
- Envelope intersection logic that reads the delegated envelope (`principal_permission_grants.scope` in the full track, the mock's store in the simulated track) and tests the claimed action against it
- Full track: real calls to `POST /api/approvals/{id}/approve` and friends with Claudia's board API key, with Paperclip writing `activity_log` rows that carry `actor_type='user'` (the approve route is a board action). Simulated track: the mock's `/resolve` route doing the equivalent, with the mock's native principal attribution.
- Your own `governance_ledger` table created and a row written for every decision Claudia makes: this is where the owner-human vs owner-identic-ai distinction lives
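The canonical-JSON round-trip the first troubleshooting item below is about, as a minimal runnable sketch with Node's built-in `crypto`. The canonicalizer here is illustrative; whatever you use, signer and verifier must byte-match.

```ts
// Hedged sketch: recursive sorted-keys canonicalizer, then sign/verify.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Canonical JSON: keys sorted at every depth, no whitespace.
function canonicalize(v: unknown): string {
  if (Array.isArray(v)) return `[${v.map(canonicalize).join(",")}]`;
  if (v !== null && typeof v === "object")
    return `{${Object.keys(v).sort()
      .map((k) => `${JSON.stringify(k)}:${canonicalize((v as Record<string, unknown>)[k])}`)
      .join(",")}}`;
  return JSON.stringify(v);
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const payload = { kind: "refund", currency: "USD", amountCents: 89_000 };

const bytes = Buffer.from(canonicalize(payload));
const signature = sign(null, bytes, privateKey);         // Claudia signs
console.log(verify(null, bytes, publicKey, signature));  // gate 2: true

// Re-encoding with different key order or whitespace breaks the byte-match:
const sloppy = Buffer.from(JSON.stringify(payload, null, 2));
console.log(verify(null, sloppy, publicKey, signature)); // false
```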
Troubleshooting:
- Signature verification works in tests but fails against the real key. Most common: payload encoding mismatch. Verify both signer and verifier use the same canonical JSON encoding (sorted keys, no extra whitespace) before signing/verifying.
- Envelope check passes for actions that should be outside envelope. The envelope in `scope` (or the mock's store) is more permissive than intended. Test edge cases explicitly: a refund of $2,000.01 against an envelope of $2,000 should fail; a refund of exactly $2,000 depends on inclusive vs exclusive semantics (pick one, document it).
- Approving did not move the issue or resume the Worker. That is correct behavior, not a bug. A Paperclip approval is a decision record. Continuing the work is a separate explicit step; wire that step on its own, after the decision is recorded.
- `governance_ledger` rows are written but Maya's weekly summary (Concept 11) doesn't include them. Different concern; that's a Decision 6+ wiring issue, not Decision 5. Verify the ledger has rows first.
Bottom line of Decision 5. Your delegation layer verifies Claudia's attestation, enforces the delegated envelope, and only then drives the real Paperclip approval routes with her board API key (full track) or the mock (simulated track). Against real Paperclip the approve route is a board action, so activity_log records actor_type='user' either way; the owner-human vs owner-identic-ai distinction lives in your own governance_ledger, which is exactly the audit truth this course builds it to provide. Everything Claudia does from here flows through this verification path.
Decision 6: End-to-end demonstration with a flood of routine approvals plus one consequential
In one line: generate a realistic week's worth of approval requests against the Course Seven workforce, most routine, one consequential, and watch Claudia handle the routines autonomously while correctly surfacing the consequential one to Maya via Telegram.
This is the moment the Invariant 2 promise becomes operationally observable. Decisions 1-5 are setup; Decision 6 is the proof. If the demonstration doesn't show the scaling property (Claudia handles routine fast, surfaces consequential to Maya, a governance_ledger row for every decision she makes), something earlier is wrong. Treat the demonstration as both a verification of the prior Decisions and an honest stress-test of the architecture.
You're about to inject 30 synthetic approval requests into Paperclip and watch Claudia handle them. The mix: 22 refunds in the $500 to $1,500 range, 4 refunds in the $1,500 to $2,000 range, 2 budget overrides at 10 to 15% over cap, 1 new-language Tier-2 hire (no envelope extension), 1 envelope-extension hire.
One standing instruction you need for this prediction, restated here so you do not have to hunt for it: Maya has a standing instruction that the first hire in a new language is a strategic moment and must be surfaced to her, even when the hire needs no envelope extension. (This is the instruction the Concept 1 PRIMM established.)
Two predictions:
- How many of the 30 should Claudia handle autonomously (auto-approve) vs surface to Maya, under the conservative envelope from Decision 4?
- Roughly how long should the autonomous-decision processing take, end-to-end, for all of those (assuming a 60-second poll cycle)?
Answers:
- 28 autonomous, 2 surfaced. The 22 refunds in $500 to $1,500 are all in the delegated envelope, so all autonomous (22). The 4 refunds in $1,500 to $2,000 are still under the $2,000 ceiling, so autonomous (4 more, 26 total). The 2 budget overrides at 10 to 15% are under the 20% ceiling, so autonomous (2 more, 28 total). The new-language hire is outside the delegated envelope per Maya's standing instruction from the Concept 1 PRIMM Predict (a Spanish-language first hire is a strategic moment), so surfaced. The envelope-extension hire is outside the delegated envelope by definition, so surfaced. 2 surfaced, 28 autonomous.
- 2 to 3 minutes total. With a 60-second poll cycle, Claudia picks up the 30 requests on her first poll cycle after injection. Reasoning about each takes roughly 3 to 5 seconds for routine cases (the prompt template is short, the decisions are clear). All 28 autonomous decisions resolve within roughly 2 minutes. The 2 surfaced go to Maya's Telegram and wait for her response, which doesn't count against Claudia's processing time.
If your prediction is far off, something in your understanding of the envelope or the polling architecture is wrong. Use the demonstration to recalibrate.
The brief. In your agentic coding tool, switch to plan mode, paste the brief below, ask the tool to produce a written plan and save it to docs/plans/decision-6.md, review it, then switch out of plan mode to execute.
Demonstrate the full pipeline end-to-end. Generate a synthetic workload of 30 approval requests with realistic distribution, post them to Paperclip, and verify Claudia handles routine items autonomously while surfacing consequential ones. Requirements:
- Generate 30 synthetic approval requests with this distribution:
  - 22 refunds in the $500-$1,500 range (Claudia should auto-approve)
  - 4 refunds in the $1,500-$2,000 range (Claudia auto-approves but flags in weekly summary)
  - 2 budget overrides at 10-15% over cap (Claudia auto-approves)
  - 1 hire proposal for a Spanish-Language Tier-2 Specialist with no envelope extension (Claudia surfaces to Maya per the standing instruction from Concept 1's PRIMM Predict: "first hire in a new language is a strategic moment")
  - 1 envelope-extension hire proposal (Claudia surfaces; outside delegated envelope by definition)
- Inject all 30 into Paperclip's approval queue (full track: create them via `POST /api/companies/{companyId}/approvals`; simulated track: the mock's queue-injection helper). Trigger Claudia's polling skill (or wait for the next 60-second poll cycle). (A generator sketch follows the brief.)
- Capture precise timing. We'll compare to the prediction in the PRIMM Predict above.
- Verify against the audit trail:
  - 28 approvals should now have `status: approved`. Full track: each has a matching Paperclip `activity_log` row, and because the approve route is a board action that row carries `actor_type='user'`, `actor_id='local-board'`, `agent_id=null`. The fact that Claudia (not Maya) drove it is recorded in your `governance_ledger`, not in `activity_log`. Simulated track: the mock's equivalent rows.
  - Each of the 28 also has a `governance_ledger` row with `principal='owner_identic_ai'`: that is the row that attributes the decision to Claudia
  - 2 approvals should still be `status: pending`, waiting for Maya
- Verify Maya received Telegram messages. She should have 2 messages from Claudia summarizing the 2 surfaced approvals, each with Claudia's reasoning for why she surfaced rather than auto-approved.
- Verify timing matches prediction (within an order of magnitude; actual numbers depend on model latency).
- Verify the governance_ledger. 28 rows should be written with the schema fields from Concept 11: `action_taken`, `confidence`, `layer_source`, `reasoning_summary`, `attestation`.
- Maya-response round-trip. Have Maya resolve one of the 2 surfaced approvals herself (approve the Spanish-language hire) as the owner-human: full track, through Paperclip's dashboard; simulated track, the mock's human-resolve path. Verify the decision is recorded and Paperclip writes a new `activity_log` row with `actor_type='user'`, joinable to the surfaced approval by `entity_id`. (Recording the decision does not itself move the linked issue; that follow-on step is separate.)
- Maya exercises an override (the Concept 12 mechanic, mandatory). Pick one of the 28 approvals Claudia auto-approved, a refund near the top of the band is a good choice, and have Maya reverse it: she decides, on review, that this one should not have been auto-approved. Record the override by writing the `governance_ledger.override_status` field to `overridden_by_owner` on that approval's row, with an `override_reason` (for example, "this customer has a pattern I want to see myself"). This is the real Concept 12 recalibration mechanic: the override is training data, not just a correction. Verify the row now shows `override_status='overridden_by_owner'` and the reason is captured.
- Run the audit-trail JOIN yourself (the spine made observable). Run the `/verify-audit-trail` slash command from the starter zip (it performs the `activity_log` JOIN `governance_ledger` on the approval id; a SQL sketch of the same JOIN appears near the end of this Decision). Observe directly: a Claudia-resolved approval and the Maya-resolved Spanish-language hire are indistinguishable in `activity_log` alone, both `actor_type='user'`; only the `governance_ledger` row, present for Claudia's, absent for Maya's, reveals which was the Identic AI's. This is the two-principal distinction, observed rather than read about.
- Have Claudia generate the weekly governance summary (the Concept 11 digest, built not just read). Ask Claudia, through Telegram, to produce the weekly governance digest from the `governance_ledger` rows this demonstration just wrote: totals, breakdown by category, the one override Maya just made, and any confidence flags. Compare its shape to the sample summary in Concept 11. This is the digest Maya consumes weekly; here you generate it once from real ledger rows so the form factor is concrete, not just described.
- Report back: the timing breakdown, the audit-row counts (activity_log + governance_ledger), the two Telegram messages Maya received, the Maya-response flow, the override row, the JOIN-query output, and the generated weekly summary.
What to expect. Your assistant generates the workload, runs the demonstration, and reports back:
- 28 autonomous approvals resolved within roughly 2 to 3 minutes
- 2 surfaced approvals waiting for Maya in Telegram with Claudia's reasoning attached
- Activity_log showing the board-action rows; governance_ledger showing 28 rows with reasoning summaries and confidence scores
- Maya's response to one surfaced approval flows through to resolution
- One auto-approval overridden by Maya, with `override_status` and `override_reason` written to the `governance_ledger` row
- The `/verify-audit-trail` JOIN output, showing Claudia-resolved and Maya-resolved approvals are identical in `activity_log` and distinguished only by the `governance_ledger`
- A Claudia-generated weekly governance summary built from the demonstration's real ledger rows
Troubleshooting:
- Demonstration runs but no Telegram messages reach Maya. The chat-app integration from Decision 1 isn't wired into the skill's surfacing logic. Check the skill's notification handler.
- Claudia approves something she should have surfaced. The envelope check in Decision 5 is too permissive. Walk the specific approval through Concept 9's intersection logic by hand to identify which check missed.
- Timing is far slower than predicted (10x or more). Model latency is the usual culprit. Check the prompt template length; if it includes verbose context for each decision, latency multiplies. Trim the prompt to the essentials.
The worked trace lives in Decision 5. The clearest single explanation of what Decision 6 produces is the seven-step worked approval thread at the start of Decision 5: one routine refund walked end-to-end, through Claudia's reasoning, the three gates, the real Paperclip call, and both audit rows. Decision 6 is that same trace, run 28 times in parallel with 2 surfaced. If you skipped it, read it now: it is the concrete picture of a single autonomous approval, and the demonstration here is just that picture at volume.
The 2 surfaced approvals look different from the 28 autonomous ones. For a surfaced approval, Claudia writes a governance_ledger row with action_taken: "surface_to_owner" and sends a Telegram message to Maya rather than calling Paperclip's approve route. When Maya later resolves the surfaced approval herself (full track: Paperclip's dashboard; simulated track: the mock's human-resolve path), her action writes a Paperclip activity_log row with actor_type='user' and actor_id set to her board user id. That row is shaped the same as the one Claudia's delegation layer produced for the 28 autonomous approvals: both are board actions. The two-principal distinction is observable by joining activity_log to governance_ledger on the approval id: an approval Claudia resolved has a governance_ledger row with principal='owner_identic_ai'; one Maya resolved directly has no such row. The distinction lives in your ledger, not in activity_log.
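A hedged SQL version of the JOIN the `/verify-audit-trail` command performs, assuming both tables live in the same Postgres; the command in the starter zip is authoritative.

```sql
-- One row per approval decision: NULL principal means Maya resolved it herself.
SELECT
  a.entity_id  AS approval_id,
  a.actor_type,          -- 'user' for every approval decision (board action)
  a.action,
  g.principal,           -- 'owner_identic_ai' when Claudia resolved it
  g.confidence,
  g.override_status
FROM activity_log a
LEFT JOIN governance_ledger g ON g.approval_id = a.entity_id
WHERE a.entity_type = 'approval'
  AND a.action IN ('approval.approved', 'approval.rejected', 'approval.revision_requested')
ORDER BY a.created_at;
```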
Bottom line of Decision 6. The end-to-end pipeline works, and you have now done the spine, not just read it. The delegate primitive's promise (Claudia handles routine fast, surfaces consequential, every decision she makes recorded in the governance_ledger) is operationally observable: you ran the activity_log JOIN governance_ledger query yourself and saw that the two-principal distinction lives in your ledger, not in Paperclip's audit trail. You exercised the Concept 12 override mechanic by reversing one of Claudia's auto-approvals into the override_status field. And you had Claudia generate the Concept 11 weekly digest from real ledger rows. This is the moment the course's central architectural claim is verified in code, and the reason the course builds the governance_ledger at all.
Decision 7: Stolen-laptop recovery and device-switch continuity
In one line: simulate two operational scenarios, Maya's laptop is stolen (revoke Claudia's Paperclip credentials from a different device) and Maya gets a new Mac (migrate her OpenClaw session to it), and verify both flows work correctly.
Concept 8 named the stolen-laptop case as the load-bearing failure mode. An architecture that has no recovery story for this is fragile. Decision 7 verifies the recovery story works, not by writing it from scratch, but by exercising the revocation flow in two realistic scenarios. A successful Decision 7 is the proof that Maya's Identic AI is recoverable, not just deployable.
The mechanism that makes revocation work: Claudia holds two Paperclip credentials, a board API key (what she drives the approval routes with) and an agent_api_keys entry (her agent identity). Revoking Claudia means revoking both: the board API key, and the agent key via DELETE /api/agents/{id}/keys/{keyId} (which sets revoked_at). Once the board key is revoked, the "stolen" OpenClaw session gets a 403 on the approval routes; once the agent key is revoked, it fails the agent-only routes too. In the full track, both revocations are Paperclip-side actions taken with the owner-human's board credentials; there is no Identic-AI signature path to either. In the simulated track, the mock's stub revocation endpoint does the equivalent.
A note on testing a revoked key in dev. In local_trusted dev mode, a revoked or invalid bearer token on a board route is silently ignored and the request falls back to the local-board actor, so "post to the approve route with the revoked key" does not cleanly demonstrate revocation. Test against an agent-only route instead (for example GET /api/agents/me, which returns 401 for a revoked agent key), or check the revoked_at column directly. The brief below uses the agent-only route.
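The clean test from that note, as a minimal sketch; the env var name is illustrative.

```ts
// Hedged sketch: prove revocation via an agent-only route, not a board route.
const API = process.env.PAPERCLIP_API_URL;
const revokedAgentKey = process.env.REVOKED_AGENT_API_KEY; // the "stolen" credential

const res = await fetch(`${API}/api/agents/me`, {
  headers: { Authorization: `Bearer ${revokedAgentKey}` },
});

// 401 proves the agent key is dead. Do not use the approve route for this
// check: in local_trusted dev mode a revoked bearer on a board route is
// silently ignored and falls back to local-board.
console.log(res.status === 401 ? "revocation: PASS" : `key still live: HTTP ${res.status}`);
```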
The two scenarios cover different failure modes:
| Scenario | Failure mode | What we're testing |
|---|---|---|
| Stolen laptop | Adversarial: someone has Maya's signing key + both API keys | Revocation works; new agent + board-key registration works; old keys dead |
| Planned device switch | Operational: Maya wants to move to a new machine | Session migration works; new device's Claudia is consistent with old |
The brief. In your agentic coding tool, switch to plan mode, paste the brief below, ask the tool to produce a written plan and save it to docs/plans/decision-7.md, review it, then switch out of plan mode to execute.
Two scenarios to test in sequence: stolen-laptop revocation, then planned device switch. Requirements:
Scenario 1, stolen-laptop revocation:
- Simulate the loss. Treat Maya's primary Mac as compromised: the attacker has Claudia's ed25519 signing key and both her Paperclip credentials (the board API key and the agent API key).
- Revoke Claudia's Paperclip credentials. From a different device (Maya logging in to Paperclip on a second device with her board credentials), revoke both: Claudia's board API key, and her `agent_api_keys` entry via `DELETE /api/agents/{id}/keys/{keyId}` (full track; simulated track: call the mock's stub revocation endpoint). This must be done with Maya's owner-human board credentials, never an Identic-AI signature: a compromised Identic AI must not be able to revoke or re-register itself.
- Verify the human-auth requirement. Attempt the revocation with an Identic-AI-signed request instead of board credentials and confirm it is rejected. (This verifies the safety property: a compromised Identic AI cannot revoke itself.)
- Verify the old keys are dead. From the "stolen" OpenClaw session, call an agent-only route, `GET /api/agents/me`, with Claudia's now-revoked agent API key; Paperclip must return 401. (Do not test this with the approve route: in `local_trusted` dev mode a revoked bearer on a board route is silently ignored and falls back to `local-board`, so the approve route would not show the revocation. The agent-only route is the clean test. Checking `revoked_at` directly in the DB also works.)
- Re-register on a fresh device. On a different machine, generate a new ed25519 key pair, mint a fresh board API key, and register Claudia as a Paperclip agent again with a fresh `agent_api_keys` entry, all through Maya's board-credentialed session, same persona, same delegated envelope (`principal_permission_grants.scope`) as before.
- Verify Claudia is fully operational on the new device.
Scenario 2, planned device switch:
- STOP THE DAEMON FIRST (critical step). On the old device, run `openclaw gateway stop` and confirm with `openclaw status` that the process is no longer running. Copying a live session directory while the daemon is running can corrupt the embedded SQLite database or cause the new device to silently drop recent decisions Claudia made. The clean stop is non-negotiable.
- Copy the session. Move `~/.openclaw/` from the old device to the new one, except for the signing keys directory, which gets regenerated.
- Generate a new key pair on the new device. Mint a fresh board API key and register Claudia as a Paperclip agent with a fresh agent API key, all through Maya's board-credentialed session, as a replacement for the old credentials (the Claudia identity stays continuous in your governance ledger; only the credentials change). Then revoke the old board API key and the old `agent_api_keys` entry.
- Start the daemon on the new device. Run `openclaw gateway start` and verify with `openclaw status`.
- Verify session continuity. Ask Claudia on the new device "what was the last approval you handled?"; she should reference the actual last approval from the old device's session. Persona, standing instructions, learned patterns: all present.
- Run a verification approval. Send a fresh approval request; confirm Claudia handles it correctly with audit entries consistent with what the old device would have produced.
Documentation for each scenario: what was preserved across the transition, what was not preserved and why, and what Maya had to do manually vs what was automatic.
What to expect. Your assistant runs both scenarios and reports:
- Stolen-laptop scenario: revocation succeeds with the owner-human's board credentials and fails (correctly) with an Identic-AI-signed request. The "stolen" session's revoked agent key returns 401 on GET /api/agents/me; in the full track its board key returns 403 on the approval routes (in local_trusted dev mode the board route falls back to the local-board actor, per the note above). A new board API key plus agent registration on a fresh device, done through Maya's board-credentialed session, is fully operational. Claudia's persona, standing instructions, and accumulated patterns survive: they live in the session on Maya's filesystem, which the stolen laptop took, so in practice Maya restores the session from her backup, and the architectural test verifies those patterns are recoverable from that backup.
- Device-switch scenario: session migrates intact via filesystem copy. New device's Claudia has the same persona, history, and patterns. The fresh board API key plus agent re-registration are the only credential-level change. Governance ledger remains intact and continuous.
Troubleshooting:
- The revocation path accepts an Identic-AI-signed request. The auth requirement is wrong; revocation must require the owner-human's board credentials. Fix it (full track: the revocation is a board-credentialed Paperclip action by construction, so check you are not routing it through Claudia's agent auth; simulated track: fix the mock's stub revocation endpoint). A compromised Identic AI must not be able to revoke itself.
- New device's Claudia doesn't have the old session. Session migration via filesystem copy is the simplest case; if it fails, check that the entire ~/.openclaw/ was copied (skills, persona, history) and not just the configuration files.
- Governance ledger has a gap during the transition. Expected if the transition takes meaningful time; document the gap and confirm Maya didn't lose any in-flight approvals during the switch.
Bottom line of Decision 7. Both failure modes are covered. Maya can recover from a stolen laptop; Maya can migrate to a new device cleanly. The architecture's recovery story is verified, not just declared. This concludes the lab.
Part 5: Operational realism: what the Identic AI learns and how the audit works
The lab in Part 4 produces a working Owner Identic AI. Part 5 is about what happens over time: what Claudia learns over six months of Maya's decisions, how Maya audits Claudia's behavior through a parallel ledger, and what to do when Claudia and Maya disagree. Three Concepts, the same shape as Course Seven's Part 5.
Concept 10: What the Identic AI learns over six months
In the demonstration in Decision 6, Claudia was already configured with 200 imported historical approval decisions (Decision 2) and a standing-instructions document (Decision 4). That is the starting state. Over six months of operation, Claudia accumulates much more, and Concept 10 walks through what kind of accumulation actually helps her be Maya's Identic AI rather than just a chat log of Maya's life.
Three layers of accumulated context.
Layer 1: explicit standing instructions. Maya tells Claudia things in Telegram. The instructions are first-class: Claudia records them, indexes them, applies them reliably. They are the rules Maya writes consciously. Concrete examples Maya might have given by month three:
- "Always surface envelope-extension hires to me, no exceptions."
- "Refunds under $300 with customer-account-age over two years: auto-approve."
- "Refunds for customers with more than three prior refunds in the last six months: always surface to me regardless of amount."
- "Budget overrides on Workers in their first 30 days of operation: always surface to me."
- "Never auto-approve anything that touches the Legal Specialist's authority envelope."
These read like company policy because that is what they are: Maya's policy, expressed in plain language, accumulated incrementally as she discovers what rules she actually wants.
Layer 2: per-decision feedback. Every time Maya overrides Claudia, Maya gives feedback. These are corrections to the patterns, layered on top of the explicit rules. Concrete examples:
- (Claudia auto-approved a $1,400 refund. Maya overrides.) "You should have surfaced this. This customer's been with us 6 weeks, that's not the long-tenure pattern I trust."
- (Claudia surfaced a routine Tier-1 hire to Maya. Maya approves and adds:) "You can approve these without me. Tier-1 hires under $300/month, established envelope, this is exactly the auto-approve case."
- (Claudia approved a budget override of 18% on a 4-month-old Worker. Maya doesn't override but comments:) "Fine this time, but going forward note that this Worker has had two overrides in a row; the third should come to me."
Each of these joins Claudia's session and refines her future behavior. Critically, they are not bare corrections; they include Maya's reasoning, and the reasoning is what lets Claudia apply the lesson to similar-but-not-identical cases later.
Layer 3: derived patterns. From watching Maya's decisions over time, Claudia builds models that Maya never stated as rules. Concrete examples that might emerge by month five:
- "Maya approves refunds faster on Mondays than Fridays. Possible inference: Friday refunds get scrutinized more, perhaps because of weekend customer-service load."
- "Maya tends to surface budget-override requests if the Worker's recent activity log shows a single large-cost incident, even within the auto-approve envelope."
- "Maya has approved every hire from a Tier-1 product line in the last 40 days; she has surfaced every hire from the new EU product line. The pattern suggests the EU line is in a different trust phase."
Each derived pattern comes with a confidence level. Some are strong after 50 decisions; some are wrong after 500. The Identic AI's honesty about which layer a decision comes from is what makes Maya able to recalibrate when she should.
A six-month walkthrough. Here is what Maya might experience across her first six months with Claudia, in rough order:
| Month | What's happening | What Claudia is doing |
|---|---|---|
| 0 (Decisions 1-4) | OpenClaw installed; 200 historical approvals imported; 8 standing instructions configured | Pattern-matches imported history; applies standing instructions reliably; surfaces ~20% of decisions because patterns are still thin |
| 1 | Maya overrides Claudia ~12 times; gives feedback each time | Layer 2 feedback joins session; behavior on similar cases improves visibly within the month |
| 2 | Maya adds 3 more standing instructions in response to recurring surface patterns | Surfacing rate drops to ~12%; auto-approval reliability holds |
| 3 | First emerging Layer 3 pattern: Claudia notes Maya's tendency to surface budget overrides on first-month Workers; starts surfacing them proactively | Maya confirms the pattern is right; it is promoted to an explicit Layer 1 rule the next time Maya edits her standing instructions |
| 4-5 | Surfacing rate stabilizes around 8-10%; override rate drops to ~3%; weekly governance-ledger reviews take ~10 minutes | Steady state; Layer 3 patterns are accumulating but slowly |
| 6 | A novel case arrives: Manager-Agent proposes hiring a Worker for a regulatory-compliance task no Worker has held before | Claudia recognizes "no pattern, surface" and routes to Maya; Maya makes a fresh judgment, comments her reasoning, joins as a new pattern |
Notice the trajectory: the early months are heavy in feedback (Layer 2) and explicit rules (Layer 1), because Maya is actively teaching Claudia. By month 4-5, the rate of corrections drops sharply, not because Claudia has memorized every case, but because Maya's style of decision-making is now reasonably captured. By month 6, Maya's attention is spent on novel cases, recalibrations, and weekly summaries. Invariant 2's promise, that Maya's attention is freed for what genuinely needs her judgment, is operationally observable in this trajectory.
The boundary between teachable and not. Not everything in Maya's session is a teachable pattern. The lower layers (explicit instructions, per-decision feedback) are reliably applied; the third layer (derived patterns) is probabilistic and varies in quality. A good Identic AI tells the difference and is honest about which layer it is drawing from. Claudia might say in her weekly governance summary: "This week I auto-approved 47 refunds based on your explicit instruction; I deferred 3 to you based on patterns I've learned but I'm only moderately confident about." That honesty is what makes Maya able to recalibrate when she should (Concept 12).
Paste this into your AI coding assistant: "Concept 10 describes three layers of context an Owner Identic AI accumulates: explicit standing instructions, per-decision feedback, and derived patterns. Given the three layers, design a one-paragraph instruction Claudia could include in her weekly governance summary that makes it clear to Maya which decisions came from which layer, so Maya knows where to focus her recalibration attention. The instruction should be concrete enough that Maya can act on it; vague enough that it doesn't require Claudia to re-explain her reasoning in detail every week. Show me three different drafts of this instruction, varying how prominent the layer-attribution is."
What you're learning: the layer-attribution problem is central to honest Identic AI design. If Claudia hides which layer she is drawing from, Maya can't recalibrate well. If Claudia explains too much, the weekly summary becomes unreadable. The right balance is a design choice; the exercise has you make that choice and see the tradeoff.
Bottom line: over six months, Claudia accumulates three layers of context: explicit standing instructions Maya writes (Layer 1, reliable), per-decision feedback Maya gives when Claudia gets something wrong (Layer 2, reliable, includes Maya's reasoning), and derived patterns Claudia learns from watching Maya decide (Layer 3, probabilistic, varies in confidence). The trajectory across months is from heavy active-teaching (months 1-3) to steady-state with occasional novel-case recalibration (months 4-6+). A good Identic AI is honest about which layer it is drawing from in any given decision; that honesty is what makes Maya's recalibration loop work.
Concept 11: The governance ledger: the Identic AI's audit stream
In Course Seven, the talent ledger was the company's audit stream: every hire, eval, retirement, and rehire event across the workforce's history, queryable with SQL. Course Eight adds a parallel audit stream: the governance ledger, which records every decision Claudia made on Maya's behalf.
Two parallel ledgers, one source of truth for each. The talent ledger records what the workforce did. The governance ledger records what Maya's Identic AI did on Maya's behalf. They share a join key (the approval ID, the source issue ID, the Worker ID), so an analyst can correlate them (which decisions affecting Worker X over time were made by Maya herself vs Claudia on her behalf?), but they are distinct sources of truth maintained separately. Maya owns the governance ledger; the company owns the talent ledger.
The schema, concretely. Every row in the governance ledger captures:
governance_ledger
├── ledger_id -- primary key for this governance row
├── timestamp -- ISO 8601, millisecond resolution
├── approval_id -- joins to talent_ledger / activity_log
├── source_issue_id -- joins to the originating issue (refund request, hire proposal, etc.)
├── principal -- "owner_identic_ai" (always: this ledger is Identic-AI-only)
├── acting_on_behalf_of -- the human owner's ID (Maya's ID)
├── action_taken -- approve / request_revision / surface_to_owner / decline
├── confidence -- 0.0 to 1.0: Claudia's self-rated confidence
├── layer_source -- "standing_instruction" / "per_decision_feedback" / "derived_pattern"
├── layer_reference -- the specific instruction or pattern that triggered (FK)
├── reasoning_summary -- 1-2 sentences explaining why
├── override_status -- null initially; "overridden_by_owner" if Maya later corrected
└── override_reason -- if overridden, Maya's stated reason (joins back as Layer 2 feedback)
The field names match the worked example in Part 4's Decision 6: ledger_id, approval_id, principal, acting_on_behalf_of, action_taken, confidence, layer_source, layer_reference, reasoning_summary, override_status, timestamp.
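As executable DDL, here is a minimal SQLite-flavored sketch of the same schema. The field names and semantics come from this Concept; the column types, CHECK constraints, and key choices are illustrative assumptions, not Paperclip's or OpenClaw's shipped definitions:
-- One row per decision Claudia makes on Maya's behalf. Types are illustrative.
CREATE TABLE governance_ledger (
  ledger_id           TEXT PRIMARY KEY,              -- e.g. 'G-2204'
  timestamp           TEXT NOT NULL,                 -- ISO 8601, millisecond resolution
  approval_id         TEXT NOT NULL,                 -- joins to talent_ledger / activity_log
  source_issue_id     TEXT NOT NULL,                 -- joins to the originating issue
  principal           TEXT NOT NULL DEFAULT 'owner_identic_ai',
  acting_on_behalf_of TEXT NOT NULL,                 -- Maya's owner ID
  action_taken        TEXT NOT NULL CHECK (action_taken IN
    ('approve', 'request_revision', 'surface_to_owner', 'decline')),
  confidence          REAL CHECK (confidence BETWEEN 0.0 AND 1.0),
  layer_source        TEXT CHECK (layer_source IN
    ('standing_instruction', 'per_decision_feedback', 'derived_pattern')),
  layer_reference     TEXT,                          -- FK to the instruction or pattern store
  reasoning_summary   TEXT,                          -- 1-2 sentences explaining why
  override_status     TEXT,                          -- null until Maya corrects
  override_reason     TEXT                           -- joins back as Layer 2 feedback
);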
A sample week's entries. What four rows from Maya's governance ledger might look like, in plain English:
| Time | Action | Why | Layer | Confidence |
|---|---|---|---|---|
| Mon 09:14 | Approved refund of $890 to customer C-3421 | Customer has account-age 3.2 years, no prior refunds, matches my "long-tenure, low-prior" pattern (Layer 3) | derived_pattern | 0.91 |
| Mon 11:02 | Surfaced to Maya: refund of $1,400 to customer C-7755 | Customer has 2 prior refunds in the last 4 months; your standing instruction (Layer 1: "more than 3 prior refunds, always surface") almost triggers; I surfaced conservatively at 2 | standing_instruction | 0.78 |
| Tue 03:11 | Approved budget override of 12% for Worker W-tier1-eu-3 | Worker is in month 4, override is for handling a documented incident from the activity log, auto-approve per delegated envelope | standing_instruction | 0.96 |
| Wed 16:48 | Surfaced to Maya: hire proposal for Compliance Specialist | No prior hire of this type in the talent ledger; no learned pattern; defaulting to surface | derived_pattern (absence) | 0.45 |
Each row is queryable, joinable, exportable. Maya doesn't read this raw; her weekly summary is generated from it.
The weekly review pattern. Maya reads the governance ledger weekly the same way the Course Seven board reads the talent ledger: not row by row, but as a summary. Claudia produces a weekly digest sent to Maya's Telegram. A realistic example:
Weekly Governance Summary, Week of May 6-12
Totals: 142 decisions handled. 134 auto-approved (94%). 8 surfaced to you (6%). You overrode 1 of my auto-approvals (0.7%).
Breakdown by category:
- Refunds: 97 (94 approved, 3 surfaced)
- Budget overrides: 28 (27 approved, 1 surfaced)
- Substrate migrations: 12 (12 approved)
- Policy-edit confirmations: 5 (2 approved, 3 surfaced)
Your override this week: refund of $1,847 to customer C-4429 (Tuesday). Your comment: "this customer has multiple prior refunds, should have surfaced." I've updated the pattern: for customers with 2+ prior refunds in the last 6 months, I'll now surface regardless of amount. (Layer 2 feedback joined.)
Confidence flags I'd like you to glance at:
- Two refunds I approved at confidence 0.62-0.68 (lower than my usual 0.85+ threshold). Both involved unusual patterns I haven't seen often. They're in the ledger at rows G-2204 and G-2241 if you want to spot-check.
- One derived pattern is starting to emerge: refund timing seems to vary by day of week in your decisions. I'm not acting on this yet, too early.
Next week's expected load: based on the workforce's current size and the past 4 weeks, expect ~150 decisions. No major projects changing my expected workload.
This is the form factor Maya actually consumes. Not row-by-row review, not raw logs, a structured weekly summary with the patterns, the exceptions, and the confidence flags surfaced.
The weekly review is what keeps Maya in the loop at the right scale. She doesn't read every decision; she reads the patterns of decisions and the exceptions. The governance ledger turns six months of Claudia's actions into a queryable, summarizable audit that Maya can act on, not just a blob of history.
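Under the hood, the digest's top-line totals are one aggregation away. A SQLite-flavored sketch against the DDL sketch above (the date range is the sample week's; in SQLite, SUM over a comparison counts the rows where the comparison is true):
-- Decisions handled, auto-approved, surfaced, and overridden for one week.
SELECT
  COUNT(*)                                     AS decisions_handled,
  SUM(action_taken = 'approve')                AS auto_approved,
  SUM(action_taken = 'surface_to_owner')       AS surfaced,
  SUM(override_status = 'overridden_by_owner') AS overridden  -- NULLs don't count
FROM governance_ledger
WHERE timestamp >= '2026-05-06' AND timestamp < '2026-05-13';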
An example SQL query Maya might run. Suppose a customer-success team member asks Maya about a series of refund decisions on a single customer. Maya can query her ledger:
SELECT timestamp, action_taken, reasoning_summary, layer_source, override_status
FROM governance_ledger
WHERE source_issue_id IN (
SELECT issue_id FROM activity_log
WHERE customer_id = 'C-4429'
)
ORDER BY timestamp;
The result is the full audit of every governance decision Claudia made on issues touching that customer. Maya can verify Claudia handled the customer correctly, identify whether any decisions should have come to her, and feed the learnings back. The governance ledger is what makes delegated governance auditable rather than just operational.
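One more worked example: the correlation question posed at the top of this Concept, which decisions affecting a given Worker were made by Maya herself versus by Claudia on her behalf. A sketch, assuming talent_ledger carries worker_id, approval_id, event_type, and timestamp columns (consistent with Course Seven's ledger, though those names are assumptions here):
-- Every approval touching Worker W-tier1-eu-3, tagged with who decided it.
-- A governance_ledger match means Claudia decided; no match means the decision
-- was made directly by a board human (Maya).
SELECT
  t.approval_id,
  t.event_type,
  CASE WHEN g.ledger_id IS NOT NULL
       THEN 'owner_identic_ai' ELSE 'owner_human' END AS decided_by,
  g.reasoning_summary
FROM talent_ledger AS t
LEFT JOIN governance_ledger AS g ON g.approval_id = t.approval_id
WHERE t.worker_id = 'W-tier1-eu-3'
ORDER BY t.timestamp;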
Paste this into your AI coding assistant: "I'm designing the governance ledger schema for an Owner Identic AI. The schema from Concept 11 of Course Eight is a reasonable starting point. Given the schema, design three additional queries Maya might want to run beyond the customer-issue example. For each query, write the SQL and explain what operational question Maya is answering. The queries should be the kind of thing Maya would actually want to ask during a weekly review or when investigating a specific concern, not synthetic exercises."
What you're learning: the governance ledger isn't valuable just because it logs decisions; it is valuable because Maya can interrogate it. The queries you'd want exist on a spectrum from "what did Claudia do this week" (weekly review) to "did Claudia handle this specific situation correctly" (investigation) to "are Claudia's learned patterns drifting" (calibration check). Knowing which queries you'd write tells you what you actually expect to use the ledger for.
Bottom line: the governance ledger is the audit stream of every decision Claudia made on Maya's behalf, append-only, queryable, summarized weekly for Maya. It is the parallel to Course Seven's talent ledger: two audit streams, one source of truth for each, joined by approval and issue IDs. The weekly summary is what makes the architecture self-monitoring at the owner's scale; Maya reads patterns and exceptions, not individual rows. The SQL queryability is what makes investigation tractable when Maya needs it.
Concept 12: When the Identic AI's judgment and the owner's diverge
The most important operational concept in Part 5 is how to handle the case when Maya disagrees with Claudia. A reader's instinct, after Concept 11, might be that this is a failure mode: Claudia did something wrong, Maya corrects it, ideally it stops happening. That instinct is wrong. Disagreement between Maya and Claudia is a healthy signal, and the architecture is designed to make it productive rather than punishing.
Why disagreement is healthy. Three reasons:
First, no derived pattern is right on the first attempt. Claudia's Layer 3 patterns (Concept 10) are learned approximations. Some will be good after 50 decisions; some will be wrong after 500. The only way to discover which is which is to watch where they fail. If Claudia and Maya never disagree, Claudia hasn't yet been pushed into novel territory; the patterns haven't been stress-tested. Disagreement is the signal that the boundary of Claudia's reliable judgment has been touched.
Second, Maya's judgment evolves. Maya in month one and Maya in month six are not the same person: she's learned things about her business, the market has changed, the workforce has grown. The patterns Claudia learned in month one will gradually become out of date. Disagreement is the signal that Maya's judgment has moved and Claudia needs to catch up.
Third, the recalibration loop is itself a teaching moment. When Maya overrides Claudia, she usually gives a one-sentence reason ("this customer has prior refunds, should have surfaced"). That reason joins the per-decision feedback layer (Layer 2 from Concept 10). The next time a similar pattern arises, Claudia weighs the new feedback against the old pattern. The override is not just a correction; it is training data Claudia uses to refine her future behavior.
The escape valve. Architecturally, Maya can always override Claudia. When Maya reverses one of Claudia's decisions, her override is a fresh board action: it writes a new activity_log row (a board action, exactly like any owner-human decision) and it sets override_status to overridden_by_owner on Claudia's original governance_ledger row, with Maya's reason. Both the original Identic AI decision and Maya's correction are preserved. Because Paperclip's own activity_log records Claudia's decision and Maya's override identically as board actions, the governance_ledger is what tells the full story of who decided what. There is no failure mode where Maya is locked out of her own company by her Identic AI. The architecture's commitment is that the human is always recoverable.
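As a sketch, here are the two writes an override performs. The governance_ledger side follows Concept 11's schema; the activity_log columns and the approval ID are illustrative stand-ins, since that table is Paperclip's own:
-- Write 1: Maya's reversal is a fresh board action in Paperclip's activity_log.
INSERT INTO activity_log (actor, action, approval_id, note, timestamp)
VALUES ('maya', 'decline', 'APR-8812',
        'this customer has multiple prior refunds, should have surfaced',
        '2026-05-12T10:05:00.000Z');
-- Write 2: mark Claudia's original row as overridden, preserving both records.
-- The reason joins back into Layer 2 per-decision feedback.
UPDATE governance_ledger
SET override_status = 'overridden_by_owner',
    override_reason = 'this customer has multiple prior refunds, should have surfaced'
WHERE approval_id = 'APR-8812'
  AND principal = 'owner_identic_ai';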
The unhealthy patterns. Three patterns are warning signs:
- Sustained, high-frequency divergence. If Maya is overriding Claudia 20%+ of the time, the configuration is wrong. Either the delegated envelope is too broad (Claudia is acting on decisions she shouldn't), or Claudia's learned patterns are systematically miscalibrated. Time to revisit Decision 4.
- Maya disengaging from the governance ledger. If Maya is too busy to read the weekly summary, the audit trail becomes dead weight. The architecture only works if Maya stays in the loop at the right scale; if she opts out entirely, she is back to de facto auto-approval, Wrong Response A from Concept 1.
- Claudia surfacing too much. The mirror failure: if Claudia surfaces 30%+ of decisions to Maya, the scaling property breaks. Claudia is being too conservative; the delegated envelope or her confidence thresholds need to be relaxed.
As a rough heuristic, and the numbers should be treated as a starting orientation rather than a benchmark to optimize against, a healthy operational state looks something like 90-95% of decisions handled autonomously by Claudia, 5-10% surfaced to Maya, and under 5% of auto-approvals overridden. The exact numbers vary by company, by Maya's risk tolerance, and by the type of work the workforce is doing; the ranges are illustrative, not measured against a published dataset. What matters more than hitting specific percentages is the shape: most decisions autonomous, a meaningful but small fraction surfaced, overrides rare enough that each one is read closely.
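A calibration-check sketch that reads this shape straight off the ledger, computing the three rates over a trailing window (the 90-day window is an arbitrary choice; SQLite-flavored, against the DDL sketch from Concept 11):
-- Autonomous, surfaced, and override rates. Healthy shape per this Concept:
-- most decisions autonomous, a small fraction surfaced, overrides rare.
SELECT
  ROUND(100.0 * SUM(action_taken <> 'surface_to_owner') / COUNT(*), 1)
    AS autonomous_pct,
  ROUND(100.0 * SUM(action_taken = 'surface_to_owner') / COUNT(*), 1)
    AS surfaced_pct,
  ROUND(100.0 * SUM(override_status = 'overridden_by_owner')
    / NULLIF(SUM(action_taken = 'approve'), 0), 1)
    AS override_pct_of_approvals
FROM governance_ledger
WHERE timestamp >= datetime('now', '-90 days');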
For each scenario below, classify it as "healthy", "unhealthy: configuration", "unhealthy: disengagement", or "unhealthy: over-conservative":
- After six months, Claudia auto-approves 92% of decisions; Maya overrides 2%; Maya reads the weekly summary every Friday.
- After three months, Claudia surfaces 35% of decisions to Maya; Maya processes them within hours.
- After eight months, Claudia auto-approves 98% of decisions; Maya hasn't read the weekly summary in a month.
- After two months, Claudia auto-approves 88% of decisions; Maya overrides 22% of those.
- After four months, Claudia auto-approves 91% of decisions; Maya overrides 4%; Maya occasionally adds a Layer 1 instruction in response to override patterns.
Answers: (1) healthy: within target ranges, owner engaged. (2) unhealthy: over-conservative: surfacing rate is too high; Claudia's confidence thresholds or delegated envelope need to be loosened. (3) unhealthy: disengagement: Maya has effectively delegated everything; this is de facto auto-approval, the Wrong Response A from Concept 1. (4) unhealthy: configuration: 22% override rate means the delegated envelope is wrong or Claudia's patterns are systematically miscalibrated; time to revisit Decision 4. (5) healthy: the active recalibration via standing-instruction edits is exactly the loop Concept 12 advocates.
Bottom line: when Maya and Claudia disagree, the architecture treats it as healthy signal, not failure. The override updates Claudia's per-decision feedback; the patterns refine over time; Maya remains in control because the escape valve is always available. The unhealthy patterns to watch for are sustained high-frequency divergence (configuration wrong), Maya disengaging from the governance ledger (architecture defeated), and Claudia over-surfacing (scaling property broken). A healthy state is ~90-95% autonomous, ~5-10% surfaced, under 5% overridden.
Part 6: The honest frontier
The course has been honest throughout about which parts of the architecture are shipped vs which are open research. Part 6 is the explicit treatment of the open frontier: three Concepts on what isn't fully solved in May 2026, what the path forward looks like, and where Course Eight's eighth-invariant claim genuinely lands vs where it gestures.
Concept 13: Self-sovereign memory, where the owner's accumulated judgment lives long-term
Concept 5 established that Maya's OpenClaw session lives on her local filesystem by design, not as an opt-in. That property satisfies Tapscott's self-sovereignty commitment for a single device. What it does not satisfy, fully, is the multi-device case.
One clarification before continuing: interface reachability is not session location. OpenClaw's 50+ integrations (including 15+ chat channels: WhatsApp, Telegram, Discord, Slack, Signal, iMessage) mean Maya can reach her Identic AI from anywhere she has her phone or laptop and her chat apps. Telegram on her phone, Telegram on her laptop, Discord in a coworker's office: all are interfaces to the same Identic AI. But interface reachability and where the session itself lives are different questions. The chat apps are stateless proxies; the session (Maya's accumulated context, learned patterns, signing keys) lives on whatever device is running the OpenClaw daemon. By default that's one device. The multi-device case below is about how the session gets to a second device, not about how Maya reaches it.
The three architectural options, today. When Maya wants her Identic AI to follow her across her laptop, her phone, her work machine, her home Mac, there are three patterns shipped in May 2026:
- Single-device sovereignty (the default). The session lives on one device. Maya uses OpenClaw from that device only. Self-sovereign in the strict sense; constrained in the practical sense.
- User-hosted sync. Maya runs her own sync layer: a private server she controls (a Raspberry Pi at home, a small VPS), running an open-source sync protocol. Her OpenClaw on each device pushes and pulls from the sync server. Still self-sovereign in the strict sense (Maya owns the server); higher operational burden.
- Encrypted-with-user-key cloud sync. Maya uses a third-party sync service, but the session is encrypted client-side with a key only Maya holds. The cloud provider stores ciphertext; the keys never leave Maya's devices. Self-sovereign in practice if the encryption is correctly implemented, but Maya is trusting the encryption implementation and the absence of side-channel leaks.
OpenClaw supports the first option out of the box and gestures at the second and third, but none of the three is a clean, no-tradeoff solution for the multi-device case in May 2026. Tapscott calls this "reinventing the AI stack." He's right.
What's actually open research. Three questions don't yet have shipped answers:
- How does the user's accumulated context survive runtime obsolescence? If OpenClaw the project shuts down in 2030 and the user has 4 years of accumulated session data, can they migrate to a different runtime? Today: in principle yes (the data is on their filesystem, human-readable); in practice the next runtime would have to import the OpenClaw session format, which assumes the format is stable. This is a long-term sovereignty question.
- What about the cryptographic identity? Maya's signing key represents Claudia. If Maya wants to move to a different runtime that uses a different identity protocol, what happens to Claudia's continuity at Paperclip? The relationship between the runtime's notion of Identic AI identity and the wider ecosystem's notion of delegated identity is not standardized.
- What about backups and recovery? If Maya loses her devices, her backups, and her primary credentials simultaneously, she loses her Identic AI. The pattern that ships in May 2026 is Maya is responsible for her own backups. Whether the architecture should support a stronger recovery story (Shamir-split keys, social recovery) is an active design question.
Course Eight teaches the architectural commitments (filesystem-local by default, user-owned keys, no platform lock-in) and names where the operationalization is partial. The course does not pretend the multi-device, multi-runtime, long-term-sovereignty story is fully solved.
Bottom line: the single-device self-sovereignty story is solved by OpenClaw's filesystem-local session design. The multi-device case has three patterns, each with tradeoffs. The long-term sovereignty case (runtime obsolescence, identity continuity, recovery) is genuinely open research as of May 2026. Course Eight teaches the architectural commitments and names what's not yet shipped.
Concept 14: Value alignment beyond pattern-matching
Concept 10 distinguished three layers of Claudia's accumulated context: explicit standing instructions, per-decision feedback, and derived patterns. The third layer is the part Course Eight is least confident teaching. Pattern-matching is not value alignment, and the distinction matters.
Patterns are surface; values are structure. Claudia might learn that "Maya tends to approve refunds in the $500-$2,000 range when the customer's account is more than two years old." That's a pattern. The underlying value Maya is acting on might be "long-tenure customers represent low fraud risk and high churn risk, so I prioritize their satisfaction": a structural belief about customer-business dynamics. Two different policies could produce the same pattern: one based on Maya's customer-relationship values, another based on, say, an arbitrary heuristic Maya picked up from a podcast. Claudia learning the pattern is genuinely helpful for routine decisions; Claudia confusing the pattern with the value is dangerous when the pattern breaks (a long-tenure customer attempts a fraudulent refund; the pattern says "approve"; the value says "verify first").
Where this matters operationally. Pattern-matching works well when the world is stable. It fails in the cases the architecture most needs to handle correctly: novel situations, edge cases, the moments Maya's judgment is most valuable. An Identic AI that pattern-matches her past decisions is good at routine; an Identic AI that's modeled her values is good at novel. The latter is what Tapscott's transcript gestures at when he talks about "reflecting your values." It is not what shipped in May 2026.
The research preview state. Anthropic and other labs have research previews that go further: value-elicitation interviews, principle-extraction from past decisions, explicit "what would you do if..." dialogues that surface the underlying structure. These exist; none of them is curriculum-ready in May 2026 in the sense that Course Eight could teach them as a stable pattern. The Course Eight commitment is to teach the pattern-matching layer well (Concepts 10-12) and to name the value-alignment layer as the open frontier.
What this implies for the delegate primitive. Course Eight's claim is that the Owner Identic AI removes the owner-attention bottleneck. The claim is true at the pattern-matching layer: Claudia handles routine decisions correctly often enough to scale Maya's workforce well past the 10-40 Worker ceiling. The claim is not fully true at the value-alignment layer: Claudia will sometimes make a novel decision Maya would have made differently, and the architecture has to assume this and design for recalibration (Concept 12). The delegate primitive scales the AI-native company by an order of magnitude, not infinitely. Past some scale (hundreds of Workers, thousands?), the residual novel-decision rate becomes large enough that pure pattern-matching delegation doesn't keep up, and value-alignment research has to ship for the architecture to scale further. We don't know exactly where that ceiling is; the course is honest about it being out there.
Bottom line: Claudia's pattern-matching layer (Concept 10) handles routine well; her value-alignment layer (the open frontier) handles novel poorly. Course Eight's Invariant 2 claim is true at the pattern-matching layer: Maya can scale her workforce past the previous attention ceiling. The claim is partial at the value-alignment layer; there is a ceiling somewhere out there beyond which pure pattern-matching delegation stops scaling. Where that ceiling sits depends on value-alignment research that hasn't shipped yet. The course is honest about the partial.
Concept 15: What's next, the Identic AI economy, the eval discipline, and the closing of the architectural primitives
Course Eight is the last course in the architectural sequence of the Agent Factory track. Course Nine, which teaches the cross-cutting discipline of eval-driven development, closes the track itself. Concept 15 closes Course Eight and points at what comes next at two scales: within the track (Course Nine and the eval discipline) and beyond it (the open frontiers of the Identic AI economy). Three things to name: where the Owner Identic AI architecture leads next, what the other Identic AI use cases (customer-side, employee-side, peer-to-peer) actually look like as the field matures, and what the seven-invariant architecture becomes once the eval discipline of Course Nine is wrapped around it.
The Identic AI economy as Tapscott describes it. From the HBR transcript: "I think that we will spend a lot less time in execution related activities. ... AI agents can handle coordination analysis, scheduling, flow through all the other stuff about execution, they can do that at machine speed. Execution increasingly becomes commoditized. And so as a manager and executive, what differentiates a firm is no longer your ability to execute, but your ability to think big picture, to choose the right goals, to define purpose, to make high quality strategic judgments." The architectural shape Tapscott is describing is a network of Identic AIs: owner-side at every AI-native company, employee-side at every workforce, customer-side at every individual, interacting under signed credentials, with humans intervening only on the consequential or strategic. Course Eight builds one node of that network completely; the network itself is what Course Eight points at.
Customer-side Identic AI as the sidebar use case. A customer Sarah, in 2027 or 2028 or 2030, has her own OpenClaw running on her laptop. She wants to interpret a contract clause from Maya's company. She messages Sarah-OpenClaw (the customer's Identic AI, not Maya's) via WhatsApp: "can you talk to ContractCo about clause 7.3 of my contract?" Sarah-OpenClaw signs a request with Sarah's credentials and posts to Maya's company's Manager-Agent. The Manager-Agent verifies Sarah's identity (passkey plus signature, the same primitives Course Eight teaches for Maya's case but across an untrusted boundary), routes to the Legal Specialist, returns the answer. Sarah's Identic AI and Maya's workforce meet as peers in the network, mediated by signed credentials, with neither side owning the whole interaction. This is the architecture Course Eight enables but does not deliver. The trust-delegation primitives in Concepts 7-9 transfer; the cross-party trust model is the missing piece.
Employee-side Identic AI. A human employee in Maya's company (a person on the team, not one of the hired AI Workers) has their own Identic AI that helps them draft emails, prepare for meetings, and manage their tasks. The employee's Identic AI talks to the company's workforce and to other employees' Identic AI peers. This is the use case OpenClaw already serves well for individual users: a personal AI living on one person's machine, reachable through their chat apps, is exactly the single-node configuration OpenClaw ships. Course Eight didn't teach it because the load-bearing case was the owner's, but the architecture extends naturally.
Peer-to-peer Identic AI. Two individuals' Identic AIs interacting directly: Maya's Identic AI talks to her co-founder's Identic AI to coordinate a meeting; Sarah's Identic AI negotiates a refund with Maya's Identic AI directly. Tapscott's end-state is humans in the loop only on the strategic; routine interaction happens between AIs. This is more speculative than the others; the trust-delegation problem at the peer-to-peer layer has open subproblems.
The seven-invariant thesis, operationalized. What was speculative when Course Three opened is concrete after Course Eight closes. Using the canonical thesis ordering and naming:
| Invariant | What it requires | Course that operationalizes it in depth |
|---|---|---|
| 1. The human is the principal | Human intent, budget, authority envelope, accountability | Foundational across the track |
| 2. Every human needs a delegate | A personal agent holding context, judgment, and authority envelope | Course Eight |
| 3. The workforce needs a management layer | Hire, assign, govern, observe, retire: the workforce OS | Course Six (Paperclip) |
| 4. Each Worker picks its own engine | Per-Worker runtime matched to the job | Course Three (introduces the engine choice) |
| 5. Every Worker runs against a system of record | Authoritative store reachable via MCP | Course Four |
| 6. The workforce is expandable under policy | Hiring as a callable capability | Course Seven (Claude Managed Agents) |
| 7. The workforce runs on a nervous system | Events, durability, flow control under envelope | Course Five (Inngest) |
Seven invariants, six courses operationalizing them in depth (Three, Four, Five, Six, Seven, Eight; Invariant 1 runs through all of them as the foundational commitment to human principal). The architecture is operationalized. An AI-native company that has all seven is scalable, auditable, governable, and built around primitives that aren't going to be obsoleted by the next runtime release, because the architecture commits to patterns, not vendors. What Course Nine then adds is the eval-driven discipline that turns "built and running" into "measurably trustworthy." The architecture is complete after Course Eight; the curriculum is complete after Course Nine.
What's next, within the track and beyond it. Course Eight gestured at three open research areas: self-sovereign memory across runtimes (Concept 13), value alignment beyond pattern-matching (Concept 14), and the cross-party trust model for customer-side and peer-to-peer Identic AI (Concept 15). These are not curriculum yet; they are the active research frontier. A reader finishing Course Eight has the framework to evaluate solutions to these problems as they ship: what makes a value-alignment proposal credible, what makes a cross-party trust architecture self-sovereign, what makes a long-term sovereignty story honest.
The architectural primitives are operationalized after Course Eight; the track continues into Course Nine. Course Nine teaches what was deliberately deferred across the architectural sequence: the cross-cutting discipline of eval-driven development. Every Worker built in Courses 3-7, every hire authorized in Course Seven, and every delegated decision Claudia makes in Course Eight has the same property: none of them is measurably trustworthy until its behavior has been evaluated, traced, graded, and improved. Course Nine takes up that discipline using a layered eval stack (trace grading for agent behavior, repo-level evals for development workflow, RAG evals for the knowledge layer, production observability). The analogy: test-driven development was the closing discipline of any SaaS engineering curriculum; eval-driven development is the closing discipline of any agentic-AI curriculum. Architecture (Courses 3-8) plus discipline (Course 9) equals the full Agent Factory track.
Bottom line: Course Eight operationalizes Invariant 2 of the Agent Factory thesis, the delegate, in the specific configuration that lets an AI-native company scale past its founder's attention. The seven architectural invariants are all operationalized in depth across Courses 3-8; Course Nine adds the cross-cutting discipline of eval-driven development that turns the architecture into something measurably trustworthy in production. The open research frontiers named in Concepts 13-15 (self-sovereign memory, value alignment, cross-party trust) sit beyond both the architecture and the discipline; they are the next decade's work.
How to actually get good at this
Reading Course Eight does not make you good at building Owner Identic AI. Using it does, and the path looks like this: four phases, in order, over a few weeks.
Phase 1: Live with an Identic AI in low stakes (week 1). Install OpenClaw on your own machine, your real machine, not a sandbox. Onboard a real persona: yours, not Maya's. Message it through your real chat app about your real day. Give it a few standing instructions about how you want it to handle your real email or your real calendar or your real task list. This phase is not about governance. It's about building intuition for what an Identic AI feels like when it's actually living with you, when its memory accumulates, when it surprises you with a proactive nudge. You'll be ready for the governance use case only after this phase lands; jumping straight to delegated governance without the personal-AI intuition produces calibration mistakes.
Phase 2: Watch where it gets you wrong (week 2). Pay attention to the moments your Identic AI surfaces something you'd have ignored, or auto-acts on something you'd have wanted to see. Each of these is a signal: either the standing instructions are off, or the patterns it's learning are misfiring. Correct in plain language; watch the next similar case to see if the correction stuck. Your sense of when to trust your Identic AI and when not to is built by watching it act on your real decisions, not by reading about Maya.
Phase 3: Apply the architecture to governance (week 3+). If you're an AI-native-company owner, configure your Identic AI as a governance delegate against your real Paperclip workforce. Use the conservative envelope from Decision 4 as your starting point. Set the dry-run mode (Decision 3's --dry-run flag) for the first three days; read what Claudia would have done; build calibration confidence before enabling real signing. Then go live with real signing for a week; watch the governance ledger; expect to refine standing instructions actively. Invariant 2, the delegate, becomes a property of your operation, not a concept from a course. That's when Course Eight has done its work.
Phase 4: Steady state and active calibration (month 2+). After roughly 4 weeks of operation, the rhythm should feel invisible: you check in with your Identic AI a few times per day on surfaced approvals, read the weekly governance summary in 10 minutes, occasionally edit a standing instruction. The architectural patterns from Course Eight are still doing the work, but you experience them as "my AI handles routine, I handle consequential, I review weekly." That's the point.
A failure mode to watch for in your own use. The most common way readers misuse Course Eight is opting out of the weekly review. The architecture's safety property, that Maya remains the principal with full audit recovery, only holds if Maya stays in the loop at the right scale. If you find yourself ignoring your governance summary for two weeks running, you've drifted into Wrong Response A from Concept 1 (de facto auto-approval). Recover by re-engaging with the weekly review before changing anything else.
If you're not an AI-native-company owner: the patterns still apply at smaller scales. An Identic AI handling your personal email approvals, your calendar conflicts, or your information triage uses the same three-layer learning model and the same recalibration loop. Course Eight teaches the architecture; the surface you apply it to is yours to choose.
Quick reference
The 15 Concepts in one line each
- An AI-native company stops scaling at the owner's attention. The hiring API is callable; the owner is not. Somewhere between 10 and 40 Workers, the owner becomes the bottleneck.
- Identic AI is the architectural answer, per Tapscott's HBR framing: personalized + value-reflecting + extension-of-self + self-sovereign + persistent memory.
- Owner Identic AI is the load-bearing case. Course Eight teaches one configuration well; Concept 15 names the others (customer-side, employee-side, peer-to-peer) as the frontier.
- OpenClaw is the runtime. Open source, user-owned, local-machine, chat-app-reachable. Verified from openclaw.ai.
- The session is Maya's. Her accumulated context lives on her filesystem in human-readable files. Self-sovereignty operationalized.
- Chat apps as the interface layer. The Identic AI lives where the owner already lives: 50+ integrations, including 15+ chat channels (WhatsApp, Telegram, Discord, Slack, Signal, iMessage), not a separate app.
- Two principals, distinct identities. Paperclip's activity_log records every approval as a board action; the course's own governance_ledger carries the owner-human vs owner-identic-ai distinction. Truthful auditing requires distinguishing.
- Signed delegation from local credentials. Maya's Identic AI signs with a key Maya owns; Paperclip verifies; revocable from any device Maya still controls.
- The two-envelope intersection. Maya's owner-authority envelope (ceiling) intersected with Claudia's delegated envelope (subset Maya chose) equals what actually executes.
- Three layers of accumulated context: explicit standing instructions, per-decision feedback, derived patterns. The third is probabilistic; honest Identic AIs say so.
- The governance ledger. Append-only audit of every decision the Identic AI made on the owner's behalf. Weekly summary to Maya.
- Disagreement is healthy signal. Maya overrides Claudia and the pattern updates. A sustained 20%+ override rate means the configuration is wrong.
- Self-sovereign memory at the multi-device and long-term layer is open research. Course Eight teaches the commitments and names the partial.
- Patterns are not values. Claudia handles routine well; novel poorly. The delegate primitive scales the workforce by an order of magnitude, not infinitely.
- The architectural sequence closes after Course Eight; the track closes after Course Nine. Seven architectural invariants operationalized in depth across Courses 3-8; eval-driven development (Course Nine) is the cross-cutting discipline that turns the architecture into measurably trustworthy behavior. Customer-side and peer-to-peer Identic AI are the open research frontier beyond both.
Command quick-ref
| Want to... | OpenClaw command |
|---|---|
| Install on a new machine | curl -fsSL https://openclaw.ai/install.sh | bash or npm i -g openclaw |
| Onboard for the first time | openclaw onboard |
| Check the install | openclaw status |
| List installed skills | openclaw skills list |
| Install a new skill | openclaw skills install <name> (from ClawHub) |
| Update to latest | openclaw update --channel stable |
File location quick-ref
| What | Where |
|---|---|
| Session and accumulated context | ~/.openclaw/workspace/ (the agent workspace is OpenClaw's persistent memory) |
| Persona and durable context | ~/.openclaw/workspace/USER.md, MEMORY.md, and workspace context/ files |
| Identic AI signing key | ~/.openclaw/keys/identic-ai.pem (or platform keystore) |
| Locally-authored skills | ~/.openclaw/workspace/skills/<name>/SKILL.md |
| Logs | via openclaw logs (consult the OpenClaw docs for on-disk log locations) |
Extension type decision tree
Need the owner's Identic AI to handle a routine decision class autonomously?
→ Configure the delegated envelope (Decision 4 / Concept 9) and a standing instruction.
Need to teach the Identic AI a new pattern from a recent override?
→ That's automatic: per-decision feedback joins Layer 2. Concept 10.
Need the company workforce to recognize a new Identic AI principal?
→ Identity registration at Paperclip (Decision 4 / Concept 8).
Need to revoke a compromised Identic AI?
→ Revocation flow (Decision 7 / Concept 8). From any device Maya still controls.
When something feels wrong
The owner is reading too many approvals → delegated envelope is too narrow.
Concept 9. Expand the envelope conservatively, watch for new failure modes.
The Identic AI is auto-approving things the owner would have surfaced → patterns
are miscalibrated. Concept 12. Use the override; the next pattern update incorporates it.
The Identic AI is unreachable → check chat-app integration first (Decision 1),
then OpenClaw daemon status (`openclaw status`), then the Paperclip-integration
skill's config (Decision 3).
The audit trail is incomplete → governance ledger writes are silently failing.
Concept 11. Check Paperclip's activity_log for the missing rows; the Identic AI's
side of the write may have succeeded while Paperclip's side failed.
References and further reading
- Tapscott, Don. (2026). You to the Power of Two: Redefining Human Potential in the Age of Identic AI. The source for the "Identic AI" framing Course Eight inherits.
- HBR IdeaCast Episode 1066, "With Rise of Agents, We Are Entering the World of Identic AI" (February 17, 2026). Don Tapscott interviewed by Adi Ignatius. hbr.org/podcast/2026/02/with-rise-of-agents-we-are-entering-the-world-of-identic-ai
- OpenClaw official site: openclaw.ai
- OpenClaw documentation: docs.openclaw.ai
- OpenClaw GitHub: github.com/openclaw/openclaw
- Paperclip documentation: docs.paperclip.ing
- Course Seven (the direct prerequisite): From Fixed to Dynamic Workforce
- Course Six (the management plane): From One Worker to a Workforce
- Course Five (the operational envelope): From Digital FTE to Production Worker
- Course Nine (the next course): Eval-Driven Development for Agentic AI, the cross-cutting discipline that turns the Courses 3-8 architecture into measurably trustworthy production behavior
Course Eight is the deep operationalization of Invariant 2 of the Agent Factory thesis: the delegate that holds the human's context, judgment, and authority envelope. Course Nine then adds the cross-cutting discipline of eval-driven development that turns every Worker, every hire, and every delegated decision into something measurably trustworthy. What you build with both, the architecture from Courses 3-8 and the discipline from Course 9, is what comes next.