Updated Mar 07, 2026

Chapter Summary

This chapter began with a three-layer definition: a Cowork plugin is a self-contained directory of components (the format Anthropic designed), knowledge-work plugins use that format to turn a general-purpose agent into a domain specialist (what the official plugins do), and Panaversity's enterprise readiness evaluation model assesses whether the result is production-ready. It ends with a complete deployment architecture. The nine lessons that connect those two points did not add complexity for its own sake; each one answered a question that the previous lesson raised. The definition raised the question of what the intelligence layer actually looks like. The intelligence layer raised the question of how the plugin infrastructure is configured. The infrastructure raised the question of what happens when the SKILL.md and higher-level policies conflict. That question required the context hierarchy. The context hierarchy pointed to the governance layer. The governance layer required the ownership model to be useful in practice. And the ownership model opened the question of what happens when the expertise encoded in a SKILL.md is generalisable beyond a single organisation.

That chain is the chapter. Understanding it as a chain — not as nine separate lessons — is the synthesis this summary is for.

The Architecture in Sequence

Each lesson answered a specific question. Each answer led directly to the next question.

| Lesson | Question Answered | Key Output |
|---|---|---|
| L01: What a Plugin Actually Is | What precisely is a Cowork plugin? | Plugin package components; enterprise readiness evaluation model |
| L02: The Intelligence Layer | What is the knowledge worker actually responsible for? | PQP Framework: Persona, Questions, Principles |
| L03: The Plugin Infrastructure | What does the rest of the plugin package contain? | plugin.json (manifest); .mcp.json (connectors); commands; agents; settings |
| L04: The Three-Level Context System | Why do SKILL.md instructions sometimes fail? | Platform → organisation → plugin hierarchy; silent override; diagnostic sequence |
| L05: The PQP Framework in Practice | What does a production-quality SKILL.md look like? | Annotated financial research SKILL.md; source integrity; uncertainty calibration |
| L06: The MCP Connector Ecosystem | What enterprise systems can the agent actually access? | Marketplace connectors; custom commissioning process; timeline expectations |
| L07: The Governance Layer | How does trust in a deployed agent accumulate? | Permissions; audit logging; shadow mode (30d/95%); HITL gates |
| L08: The Division of Responsibility | Who is responsible when something goes wrong? | Three-way ownership model; layer independence; SKILL.md maintenance as ongoing discipline |
| L09: The Cowork Plugin Marketplace | What happens when the expertise is generalisable? | Vertical skill packs; connector packages; transferability test |
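The infrastructure components named in Lesson 3 can be pictured as a directory layout. The tree below is an illustrative sketch only: the chapter names the components (SKILL.md, plugin.json, .mcp.json, commands, agents, settings) but not their exact on-disk arrangement, which is assumed here.

```text
my-plugin/                 # hypothetical plugin root
├── SKILL.md               # intelligence layer: Persona, Questions, Principles
├── plugin.json            # manifest
├── .mcp.json              # MCP connector configuration
├── commands/              # command definitions
├── agents/                # agent definitions
└── settings               # plugin settings
```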

Three Insights That Connect the Architecture

Reading the nine lessons as a sequence reveals three insights that no individual lesson states on its own.

The first is that the SKILL.md is not one component among many — it is the component that everything else serves. The manifest and settings configure the environment in which the SKILL.md operates. The connectors supply the data the SKILL.md instructs the agent to use. The governance layer enforces the boundaries the SKILL.md defines. Remove the SKILL.md and you have deployment infrastructure without intelligence. A well-written SKILL.md makes the rest of the architecture useful. A poorly written one makes it unreliable regardless of how correctly the other components are configured.

The second insight is that the knowledge worker's role is authorial, not technical. Writing the SKILL.md requires domain expertise, not programming ability. Reviewing the .mcp.json to verify connector scope requires infrastructure literacy, not systems engineering. Designing the shadow mode rubric requires knowing what accuracy means in the domain, not statistical training. Identifying the HITL gates requires understanding which decisions carry professional accountability, not governance theory. The chapter's architecture was designed with a deliberate non-negotiable: the person who holds the domain expertise should be able to deploy without depending on technical intermediaries for the core intelligence layer.

The third insight is that governance is not the end of the deployment story — it is the beginning of the trust story. Shadow mode, audit trails, and HITL gates do not exist to limit what an agent can do. They exist to produce the evidence that allows a sceptical compliance function, a cautious general counsel, or a regulated industry's oversight body to permit the agent to do more. The 30-day shadow mode period produces the corpus that justifies autonomous operation. The audit log turns a potential compliance incident into a documented, defensible process. Governance is what converts a promising demonstration into a deployable system.
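The shadow-mode transition logic is simple enough to state as code. The sketch below assumes the two criteria this chapter names (a 30-day minimum shadow period and a 95% accuracy threshold); the function name, signature, and data shape are illustrative assumptions, not a Cowork API.

```python
from datetime import date

# Illustrative sketch only. The thresholds come from the chapter
# (30 days / 95%); everything else here is an assumed shape.
MIN_SHADOW_DAYS = 30
MIN_ACCURACY = 0.95

def ready_for_autonomy(start: date, today: date,
                       correct: int, total: int) -> bool:
    """True only when BOTH shadow-mode criteria are met:
    the minimum observation period has elapsed AND the reviewed
    output corpus meets the accuracy threshold."""
    days_elapsed = (today - start).days
    accuracy = correct / total if total else 0.0
    return days_elapsed >= MIN_SHADOW_DAYS and accuracy >= MIN_ACCURACY

# Example: 45 days of shadow mode, 97 of 100 reviewed outputs correct
print(ready_for_autonomy(date(2026, 1, 1), date(2026, 2, 15), 97, 100))  # → True
```

The point the code makes explicit is that the criteria are conjunctive: a plugin that hits 95% accuracy in its first week still waits out the full observation period, because the corpus, not the score alone, is what justifies autonomous operation.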

The Component That Determines Everything

Of the eight components in the ownership table, one is owned entirely by the knowledge worker, is written entirely in plain English, determines the agent's identity, scope, and operating logic, and is the component most likely to drift from production reality without disciplined maintenance. That component is the SKILL.md.

The chapter taught the architecture around it. The PQP Framework — Persona, Questions, Principles — gave the structure. The annotated financial research example in Lesson 5 showed what production quality looks like. The ownership model in Lesson 8 established that maintaining it is an ongoing professional responsibility, not a one-time authorship task.
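As a reminder of the shape that structure takes, here is a minimal skeleton of the three PQP sections. The headings follow the framework; the bracketed content is placeholder guidance, not material from the Lesson 5 annotated example.

```markdown
# SKILL.md (skeleton)

## Persona
[Who the agent is: role, seniority, and the audience it produces for.]

## Questions
[What the agent must establish before acting: the decision the output
will inform, the constraints that apply, what is out of scope.]

## Principles
[Domain-specific operating rules, each tied to the failure mode it
prevents: e.g. source integrity, uncertainty calibration, escalation
thresholds.]
```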

What the chapter did not teach is how to extract the domain expertise that goes into it. Writing a production-quality SKILL.md requires articulating, often for the first time in explicit form, the professional standards, decision-making logic, escalation thresholds, and quality criteria that ordinarily exist as institutional memory and professional judgement. This is the hardest part of the process — not because the SKILL.md is technically complex, but because making tacit expertise explicit is genuinely difficult work. The chapter showed the structure. Chapter 16 teaches the methodology for producing the content.

Self-Assessment Checklist

Before continuing, verify that you can answer these questions with specificity. Generic answers indicate a concept that needs review.

  • The plugin package structure: Can you name the main components of a plugin package, their owners, and what each one does — without conflating the intelligence layer with the infrastructure layer?
  • The PQP Framework: Can you describe what each of the three SKILL.md sections does and explain, for each one, what happens to the agent when that section is missing or poorly written?
  • Source integrity and uncertainty calibration: Can you explain why these are domain-specific principles rather than generic quality standards, and identify what failure mode each one prevents?
  • The three-level context hierarchy: Can you describe the diagnostic sequence and explain why starting at the SKILL.md level is almost always the wrong place to begin?
  • Shadow mode: Can you state the two criteria for transitioning to autonomous operation and explain why the 30-day minimum is not negotiable?
  • The ownership model: Given a described plugin failure, can you assign it to the correct owner without deliberating?
  • The marketplace: Can you apply the transferability test to a body of domain expertise and correctly classify it as publishable or not?

If any of these are uncertain, revisit the relevant lesson before continuing. Chapter 16 assumes the architecture is understood and proceeds directly to the extraction methodology.

What Comes Next

Chapter 16 opens the methodology. Where this chapter gave you the complete architecture of a Cowork plugin and established what a production-quality SKILL.md looks like, Chapter 16 gives you the process for producing one. The Knowledge Extraction Method is a structured approach to making tacit expertise explicit — to taking the professional judgement that exists in a domain expert's head and translating it into the Persona, Questions, and Principles that determine what a deployed agent does.

The architecture does not change. The plugin package structure, the context hierarchy, the governance layer, and the ownership model are the permanent infrastructure. Chapter 16 is about the most critical act within that infrastructure: authoring the document that gives the agent its intelligence.

Try With AI

Use these prompts in Anthropic Cowork or your preferred AI assistant to integrate the chapter's architecture.

Prompt 1: Personal Architecture Mapping

I have just completed Chapter 15 on the enterprise agent blueprint.
I work as [YOUR ROLE] in [YOUR INDUSTRY]. Help me map the full
chapter architecture to a specific workflow I want to automate:
[DESCRIBE THE WORKFLOW IN 2-3 SENTENCES].

Walk me through each architectural element:
1. SKILL.md: What would the Persona, Questions, and Principles
sections need to address for this workflow?
2. Connectors: Which marketplace connectors would I need? Which
systems might require custom commissioning?
3. Governance: What would 95% accuracy mean for this workflow?
What are the natural HITL gates?
4. Ownership: Who in my organisation would own each component?

Identify any gaps where I would need information I do not currently
have to answer one of these questions.

What you're learning: How to apply the complete chapter architecture to a real deployment scenario. This synthesis exercise forces you to use every element — SKILL.md, connectors, governance, ownership — in sequence for a specific workflow, revealing which parts of the architecture you have understood deeply and which remain abstract.

Prompt 2: Comparative Architecture Analysis

Compare two plugin deployments with different governance profiles:
(1) A financial research agent at an asset management firm, operating
under FCA oversight, producing analysis that informs board-level
investment decisions. (2) A project coordination agent at a design
consultancy, producing internal meeting summaries and task assignments
for a team of twelve.

For each deployment, trace through:
- What governance configuration would the administrator need to set?
- What shadow mode criteria would be appropriate?
- Where would the HITL gates be?
- How would the ownership model differ in practice?

Explain why the same architectural framework produces very different
governance profiles for these two use cases.

What you're learning: How the chapter's architecture adapts to context. The plugin package structure, governance layer, and ownership model are consistent across deployments — but their configuration varies significantly based on stakes, regulatory environment, and user profile. Comparing two contrasting cases makes this adaptation concrete rather than theoretical.

Prompt 3: Bridge to Chapter 16

I understand the architecture of a Cowork plugin from Chapter 15.
The component I am least confident about writing is the SKILL.md —
specifically, the Principles section, which requires encoding domain-
specific operating logic.

For my domain of [YOUR PROFESSIONAL DOMAIN], help me surface what I
actually know that would belong in a Principles section:

Ask me five questions that a skilled interviewer would ask a domain
expert to surface tacit knowledge — the kind of knowledge that experts
apply automatically but rarely articulate explicitly. After I answer
each question, help me translate my answer into a candidate Principle
that is specific enough to be actionable (not generic), domain-specific
enough to be meaningful (not universal), and grounded in a failure
mode it prevents (not aspirational).

This is preparation for Chapter 16's Knowledge Extraction Method.

What you're learning: The gap between understanding the SKILL.md's architecture and being able to write one is the gap that Chapter 16 addresses. This prompt simulates the extraction process that Chapter 16 will teach systematically — surfacing tacit expertise through structured questioning and translating it into specific, actionable Principles. Starting the process before Chapter 16 makes the methodology more immediately applicable when you encounter it.
