Updated Mar 07, 2026

Chapter Summary

This chapter began with a diagnosis: enterprise AI stalled because the central problem -- transferring domain expertise into deployable agent instructions -- had no structural solution. It ends with a toolkit. The frameworks you have learned are not independent models to be memorised separately. They form a connected decision system, and understanding how they connect is more valuable than recalling any single framework in isolation.

Here is the logic chain that runs through all eight lessons. Each framework answers a specific question, and the answer to each question determines which framework you need next.

The Decision Chain

The chapter's frameworks connect in a specific sequence. Each step depends on the one before it.

| Step | Question | Framework | Lesson |
|------|----------|-----------|--------|
| 1 | Why did enterprise AI stall? | The Pilot Trap and knowledge transfer diagnosis | L01 |
| 2 | What changed to unlock deployment? | 2026 platform shift (Cowork and Frontier) | L02 |
| 3 | Who is central to the solution? | Knowledge worker as author and operator | L03 |
| 4 | Which platform fits my context? | Cowork vs Frontier decision framework | L04 |
| 5 | How do I capture value? | Four monetisation models | L05 |
| 6 | Is the organisation ready? | Five-level maturity model | L06 |
| 7 | Which domain am I deploying in? | Seven professional domain profiles | L07 |
| 8 | How do I start? | Knowledge question and conversation qualification | L08 |

The sequence matters. You cannot select a platform (Step 4) without understanding why the knowledge worker is central (Step 3). You cannot frame monetisation (Step 5) without knowing which platform fits the context (Step 4). You cannot have an effective deployment conversation (Step 8) without the vocabulary from every preceding step.

The Frameworks as a Connected System

Three core insights tie the entire chapter together.

First: the problem is knowledge transfer, not technology. The technology arrived years before organisations could use it. What was missing was a structural way to move expertise from the professional's head into a deployable agent. The 2026 platforms solved this by putting the knowledge worker -- not the developer -- in the authoring position.

Second: every deployment decision flows from the knowledge question. Whose expertise, encoded in what form, available to whom, under what constraints? The answer determines platform choice (Cowork for team-level, Frontier for enterprise-wide), monetisation model (success fee for measurable outcomes, subscription for ongoing operations), maturity requirements (Level 2 minimum for pilot, Level 3 for full deployment), and domain profile (which of the seven sections guides the implementation).

Third: qualification before proposal. The maturity model is not an academic framework. It is a filter. A Level 1 organisation needs education. A Level 3 organisation needs governance. Offering the wrong thing at the wrong maturity level wastes everyone's time.
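The decision logic in these insights is mechanical enough to sketch in code. The Python below is an illustrative sketch only: the `Deployment` type, its field names, and the string labels are assumptions made for this example, not part of any Cowork or Frontier API. It simply encodes the mappings the chapter states -- Cowork for team-level scope, Frontier for enterprise-wide; success fee for measurable outcomes, subscription for ongoing operations; and the maturity model as a qualification filter.

```python
# Hypothetical sketch of the chapter's decision mappings as plain functions.
# All names and labels here are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Deployment:
    scope: str      # "team" or "enterprise"
    outcome: str    # "measurable" or "ongoing"
    maturity: int   # organisation's maturity level, 1-5

def platform(d: Deployment) -> str:
    # Cowork for team-level deployments, Frontier for enterprise-wide.
    return "Cowork" if d.scope == "team" else "Frontier"

def monetisation(d: Deployment) -> str:
    # Success fee for measurable outcomes, subscription for ongoing operations.
    return "success fee" if d.outcome == "measurable" else "subscription"

def engagement(d: Deployment) -> str:
    # The maturity model as a filter: Level 1 needs education, Level 2 is
    # the minimum for a pilot, Level 3 unlocks full deployment (with governance).
    if d.maturity >= 3:
        return "full deployment with governance"
    if d.maturity == 2:
        return "pilot"
    return "education"

team = Deployment(scope="team", outcome="measurable", maturity=2)
print(platform(team), "|", monetisation(team), "|", engagement(team))
# → Cowork | success fee | pilot
```

The point of the sketch is the shape of the system, not the code itself: every branch is an answer to one question in the decision chain, and offering the wrong branch at the wrong maturity level is exactly the mismatch the qualification filter exists to prevent.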

Self-Assessment Checklist

Before moving to Chapter 15, test whether you can answer these questions. If you can answer all of them with specificity -- not generalities -- you have the strategic vocabulary this chapter aimed to build.

  • The Pilot Trap: Can you explain why enterprise AI stalled in 2024-2025 and identify the structural problem (not the technology problem)?
  • Platform Landscape: Can you describe when Cowork is the right choice and when Frontier is the right choice, based on scope, procurement model, and knowledge type?
  • Knowledge Worker Centrality: Can you articulate why domain experts are the most valuable participants, not peripheral supporters?
  • Monetisation: Can you match each of the four models (Success Fee, Subscription, License, Marketplace) to the domain and stakeholder where it fits best?
  • Maturity Assessment: Can you assess an organisation's maturity level and recommend the appropriate type of engagement for that level?
  • Domain Mapping: Can you identify which of the seven domains matches your expertise and name the institutional knowledge you would encode?
  • The Knowledge Question: Can you answer "whose expertise, in what form, for whom, under what constraints" for a specific deployment you care about?
  • Conversation Qualification: Can you assess a stakeholder's readiness and frame value in terms that resonate with their specific role?

If any of these feel uncertain, revisit the relevant lesson before continuing. The deployment chapters (15--29) assume this vocabulary is in place.

What Comes Next

Chapter 15 opens the blueprint. Where this chapter gave you the strategic vocabulary to evaluate, qualify, and frame enterprise AI deployments, Chapter 15 gives you the technical architecture that makes deployment real. You will see what a Cowork plugin looks like from the inside: how your expertise becomes agent instructions, how connectors attach to your organisation's systems, and how governance controls ensure the agent operates within the constraints you defined.

The strategic vocabulary does not become obsolete. It becomes the language you use to explain what the technical architecture is doing and why.

Try With AI

Use these prompts in Anthropic Cowork or your preferred AI assistant to explore these concepts further.

Prompt 1: Personal Application

I have just completed Chapter 14 on the enterprise agentic landscape.
Help me create a personal deployment roadmap. Here is my context:
I work as [YOUR ROLE] in [YOUR INDUSTRY], my organisation is at
approximately Level [your estimate] maturity, and the institutional
knowledge I want to encode is [describe it]. Walk me through the
decision chain: knowledge question, maturity assessment, platform
recommendation, monetisation model, and domain mapping. Give me three
concrete next steps I can take this week.

What you're learning: How to apply the complete decision framework to your specific situation. This synthesis exercise forces you to use every framework from the chapter in sequence, revealing which ones you have internalised and which need review.

Prompt 2: Framework Analysis

Compare two hypothetical deployments: (1) A five-person sales team
at a startup wanting to improve lead qualification, and (2) A
500-lawyer firm wanting to standardise contract review across offices.
For each, trace through the decision chain: What is the knowledge
transfer problem? Which platform fits? Which monetisation model? What
maturity level is required? Which domain chapter applies? Explain why
the same decision framework produces different answers for each case.

What you're learning: How the decision system adapts to different contexts while maintaining the same logical structure. This comparison exercise demonstrates that the frameworks are tools for thinking, not templates for copying.

Prompt 3: Domain Research

The seven domains in Chapter 14 do not include education,
manufacturing, or real estate. Pick one of these unlisted domains and
apply the chapter's framework to it: What institutional knowledge is
at risk? Which monetisation model would fit? What maturity level would
an organisation need to deploy? Which of the seven listed domains is
the closest analogue, and what would need to change in the deployment
approach? This analysis will test whether the frameworks generalise
beyond the listed domains.

What you're learning: Whether the chapter's frameworks are genuinely general-purpose or limited to the seven listed domains. Applying the decision system to an unlisted domain is the strongest test of comprehension -- if the frameworks work, they should produce coherent answers for any domain with institutional knowledge at risk.
