Chapter Summary and Quick Reference
You began this chapter with a COO's observation: operations teams spend a disproportionate amount of their time managing the consequences of invisible problems. Problems that would have been visible all along, if anyone had been watching.
Over fourteen lessons, you built the watching infrastructure. Every vendor in the portfolio is now visible, evaluated, and tracked. Every critical process is documented, owned, and version-controlled. Every compliance obligation is mapped to a control, an owner, and an evidence location. Every significant risk is scored, prioritised, and connected to an escalation threshold. Four persistent agents are watching the portfolio continuously — not because humans cannot manage it, but because the volume and frequency of what needs watching exceeds what any team can sustain manually.
The Central Insight
Operations is not primarily an administrative function. It is an intelligence function — its job is to make the invisible visible.
Vendor spend that nobody has totalled. Process steps that live in three people's heads rather than in a document. Compliance obligations that were tracked when they were introduced but whose controls have since drifted. Risks that everyone is vaguely aware of but nobody has formally quantified. Change impacts that seemed obvious to the team that requested the change and invisible to every other team it affected.
When the invisible becomes visible, decisions improve. The CFO who knows the full vendor portfolio finds the savings that were always there. The COO who knows the process gaps closes them before they cause failures. The Compliance Officer who knows every obligation ensures every one is met. The Change Manager who maps every impact prevents the downstream failure that nobody anticipated.
The two-plugin architecture — official Operations plugin for standard workflows, custom Operations Intelligence plugin for the gaps — does not run the organisation. It makes the organisation visible to the people who run it.
What This Chapter Built
| Capability | Plugin Command / Tool | Lesson |
|---|---|---|
| Vendor portfolio audit | /vendor-review | L03 |
| SLA scorecards + renewal calendar | /vendor-review | L03 |
| Contract obligation extraction | /contract | L04 |
| SOP library + process documentation | /process-doc + /runbook | L05 |
| Change impact assessment + rollback | /change-request | L06 |
| Compliance obligation map | compliance-tracking (auto-skill) | L07 |
| Audit evidence packs + mock review | /audit | L08 |
| Operational risk register | risk-assessment (auto-skill) | L09 |
| Incident post-mortem + Five Whys | /incident | L10 |
| Operational metrics framework | /metrics | L11 |
| Monthly operations status report | /status-report | L11 |
| Four persistent monitoring agents | vendor-watchdog, process-health, compliance-monitor, change-tracker | L12 |
| Operations intelligence brief | /metrics + /status-report + agent outputs | L13 |
| End-to-end operations sprint | All of the above | L14 |
What Does Not Change
Operations still requires judgment — and no amount of intelligence infrastructure changes that.
Deciding whether to exit a vendor relationship despite a long history. Choosing how to communicate a difficult change to a team that will resist it. Making the call to roll back a system change at midnight when costs are mounting. Determining which compliance risk to accept and which to remediate immediately. These are judgment calls. They belong to people, not to plugins.
What the intelligence infrastructure changes is the quality of information available when those calls are made. The vendor with the underperformance data assembled. The change with the impact map complete. The compliance obligation with its evidence chain intact. The incident with its timeline already reconstructed from log data.
Better information, in the hands of the same people, produces better decisions. That is what operational intelligence does.
Quick Reference: Official Plugin Commands
| Command | Use | Lesson(s) |
|---|---|---|
| /vendor-review | Vendor portfolio audit, SLA scorecards, renewal calendar, vendor comparison | L02, L03 |
| /process-doc | Process documentation — SOPs, RACI matrices, flowcharts, gap analysis | L05 |
| /runbook | Operational runbook creation and maintenance | L05 |
| /change-request | Change impact assessment, communications plan, rollback design | L06 |
| /status-report | Status reports with KPIs, risk summary, and action items | L11, L13 |
Quick Reference: Custom Plugin Commands
| Command | Use | Lesson(s) |
|---|---|---|
| /audit | Audit preparation, evidence packaging, mock review, response framework | L02, L08 |
| /contract | Contract obligation extraction, risk flagging, renewal strategy | L04 |
| /incident | Post-mortem documentation, Five Whys RCA, corrective action tracking | L10 |
| /metrics | Operational metrics framework design, dashboard structure, reporting templates | L11, L13 |
Quick Reference: Auto-Skills (Natural Language Only — Never Slash Commands)
| Auto-Skill | Trigger Keywords | Use | Lesson |
|---|---|---|---|
| compliance-tracking | "compliance", "obligation", "regulatory", "control" | Map compliance obligations and assess controls | L07 |
| risk-assessment | "risk", "risk register", "risk assessment", "mitigation" | Build risk registers and assess risk severity | L09 |
| process-optimization | "optimize", "improve process", "bottleneck", "efficiency" | Identify process inefficiencies and improvements | L05 |
/compliance-tracking, /risk-assessment, and /process-optimization are not valid commands. These auto-skills activate from the keywords in your natural language prompts. Write a natural sentence containing the trigger keywords and the skill activates automatically.
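The keyword-trigger mechanism can be illustrated with a small routing sketch. The trigger lists mirror the table above; the matching rule (a case-insensitive substring check) and the function name are assumptions for illustration — the plugin runtime's actual activation logic is not exposed.

```python
# Illustrative sketch of keyword-based auto-skill routing.
# The case-insensitive substring match is an assumption, not the
# plugin runtime's actual implementation.
TRIGGERS = {
    "compliance-tracking": ["compliance", "obligation", "regulatory", "control"],
    "risk-assessment": ["risk", "risk register", "risk assessment", "mitigation"],
    "process-optimization": ["optimize", "improve process", "bottleneck", "efficiency"],
}

def match_auto_skills(prompt: str) -> list[str]:
    """Return the auto-skills whose trigger keywords appear in the prompt."""
    text = prompt.lower()
    return [skill for skill, words in TRIGGERS.items()
            if any(w in text for w in words)]

# A natural sentence containing a trigger keyword activates the skill;
# a prompt with no trigger keywords activates nothing.
print(match_auto_skills("Build a risk register for our vendor portfolio"))
print(match_auto_skills("Summarise last week's meeting notes"))
```

This is why writing a natural sentence is enough: activation depends on the keywords in your prompt, not on any slash-command syntax.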
Quick Reference: Persistent Agents
| Agent | Schedule | Monitors | Primary Output | Lesson |
|---|---|---|---|---|
| vendor-watchdog | Weekly | Renewals approaching, SLA breach flags, unapproved vendor spend, new vendor additions | Vendor alert digest | L12 |
| process-health | Monthly | SOP currency, orphaned processes, processes affected by recent changes or regulations | Process health report | L12 |
| compliance-monitor | Weekly | Obligation review dates, evidence aging, regulatory change alerts, control status | Compliance status report | L12 |
| change-tracker | Weekly | Change pipeline progress, impact assessment compliance, PIR completion, rollback triggers | Change status digest | L12 |
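The cadences above can be sketched as a simple schedule lookup. The dictionary and helper are illustrative assumptions — the chapter defines cadences for the agents, not an execution engine, and "Monthly" is approximated here as 30 days.

```python
from datetime import date, timedelta

# Cadences from the persistent-agents table; 30 days stands in for
# "Monthly" (an approximation, not a calendar-month rule).
AGENT_CADENCE_DAYS = {
    "vendor-watchdog": 7,
    "process-health": 30,
    "compliance-monitor": 7,
    "change-tracker": 7,
}

def next_run(agent: str, last_run: date) -> date:
    """Return the next scheduled run date for an agent."""
    return last_run + timedelta(days=AGENT_CADENCE_DAYS[agent])
```

A scheduler built on this lookup would simply compare `next_run(...)` against today's date to decide which agents are due.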
Key Framework: Risk Scoring Matrix
| Impact ↓ / Likelihood → | Rare (1) | Unlikely (2) | Possible (3) | Likely (4) | Almost Certain (5) |
|---|---|---|---|---|---|
| Negligible (1) | 1 | 2 | 3 | 4 | 5 |
| Minor (2) | 2 | 4 | 6 | 8 | 10 |
| Moderate (3) | 3 | 6 | 9 | 12 | 15 |
| Major (4) | 4 | 8 | 12 | 16 | 20 |
| Critical (5) | 5 | 10 | 15 | 20 | 25 |
| Score | Band | Action Required |
|---|---|---|
| 1-4 | Low | Monitor; review annually |
| 5-9 | Medium | Mitigate; owner assigned; review quarterly |
| 10-16 | High | Immediate mitigation plan; escalate to COO; review monthly |
| 17-25 | Critical | Immediate action; board-level visibility; treatment plan within 30 days |
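The matrix and its bands reduce to two small helpers. The function names are illustrative assumptions; the arithmetic and thresholds come directly from the tables above.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Score = likelihood (1-5) x impact (1-5), per the matrix above."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be between 1 and 5")
    return likelihood * impact

def risk_band(score: int) -> str:
    """Map a score to its band per the action table above."""
    if score <= 4:
        return "Low"
    if score <= 9:
        return "Medium"
    if score <= 16:
        return "High"
    return "Critical"
```

For example, a Likely (4) risk with Major (4) impact scores 16 and lands in the High band, triggering an immediate mitigation plan and COO escalation.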
Key Framework: Change Classification
| Class | Impact Profile | Approval Authority | Notice Required |
|---|---|---|---|
| Standard | Routine, pre-approved; reversible within 1 hour | Operations Manager | None (log only) |
| Significant | Affects 1-2 systems or teams; tested rollback available | COO or delegate | 48 hours |
| Major | Multi-system or cross-functional impact; complex rollback | COO + CFO | 5 working days |
| Critical | Organisation-wide or irreversible; no reliable rollback | Board sign-off | 10 working days |
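The classification table is, in effect, a lookup from change class to approval authority and notice period. A minimal sketch, with the dictionary and function name as illustrative assumptions:

```python
# Lookup mirroring the change classification table above.
CHANGE_CLASSES = {
    "Standard":    {"approval": "Operations Manager", "notice": "None (log only)"},
    "Significant": {"approval": "COO or delegate",    "notice": "48 hours"},
    "Major":       {"approval": "COO + CFO",          "notice": "5 working days"},
    "Critical":    {"approval": "Board sign-off",     "notice": "10 working days"},
}

def change_requirements(change_class: str) -> dict:
    """Return the approval authority and notice period for a change class."""
    try:
        return CHANGE_CLASSES[change_class]
    except KeyError:
        raise ValueError(f"Unknown change class: {change_class!r}") from None
```

Encoding the table this way makes the approval path unambiguous: a Major change always requires COO + CFO sign-off and five working days' notice, regardless of who raised it.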
Key Framework: Compliance Status Codes
| Code | Meaning | Required Action |
|---|---|---|
| Current | Obligation met; control active; evidence complete and dated | Review at next scheduled cycle |
| Review | Obligation met; evidence approaching expiry or control due for review | Schedule review within 60 days |
| Partial | Obligation partially met; one or more components incomplete | Owner assigned; remediation plan required |
| Gap | Evidence missing or control inactive; obligation materially at risk | Immediate escalation to CCO |
| Urgent | Regulatory deadline approaching with Gap or Partial status | Board-level visibility; external advice |
Key Framework: SOP Quality Standard
A well-formed SOP meets all five criteria:
| Criterion | Requirement |
|---|---|
| Specific | Every step names the action, the actor (role), the system, and the output |
| Owned | A named role owns each step; a named role owns the document |
| Controlled | Version number, effective date, and review date are visible on the document |
| Current | Reflects the process as it operates today, not as it operated when the document was written |
| Tested | At least one person not involved in writing has followed the SOP successfully |
Key Framework: Incident Post-Mortem Quality Test
A post-mortem is fit for purpose when it produces:
| Element | Quality Test |
|---|---|
| Timeline | Complete; sources cited; no unexplained gaps longer than 30 minutes |
| Root causes | At least one systemic root cause (not just immediate cause); verified by Five Whys |
| Corrective actions | Each action is specific, time-bound, and assigned to a named owner |
| Process gap identified | At least one SOP or control that failed or was absent is named |
| Non-recurrence assurance | Actions address root causes, not just symptoms |
Try With AI
Reproduce: Apply what you just learned to a simple case.
Using Chapter 38's two-plugin architecture, route each of the following
operational tasks to the correct command, auto-skill, or agent. For each,
specify: (1) the tool to use, (2) whether it is an official command, custom
command, auto-skill, or agent, and (3) one sentence explaining why it
belongs to that tool and not another.
Tasks:
1. Audit all active vendor contracts to find those renewing in the next 90 days
2. Map all GDPR obligations for a UK professional services firm
3. Prepare evidence packs for an upcoming ISO 27001 surveillance audit
4. Write an SOP for the employee offboarding process
5. Run a post-mortem for a payroll system outage last week
6. Monitor vendor SLA compliance every week without manual intervention
7. Build a metrics framework for operational performance reporting
8. Assess the impact of migrating the CRM system to a new platform
What you are learning: Routing operational tasks to the correct tool is the synthesis skill this chapter has been building. A correct route for all eight tasks confirms you have internalised the two-plugin architecture and the zero-overlap principle.
Adapt: Modify the scenario to match your organisation.
Review your completed ops.local.md configuration file. For your specific
organisation — given its size, industry, jurisdiction, and regulatory profile:
1. Which two plugin commands will you use most frequently, and why?
2. Which of the four agents is most critical to configure first?
3. Which compliance status codes (Current / Review / Partial / Gap / Urgent)
best describe the current state of your compliance portfolio?
4. What is the single highest-value use of the operations intelligence layer
for your organisation in the next 30 days?
What you are learning: The value of an operations intelligence layer depends entirely on your organisation's specific profile. This prompt forces you to apply the chapter's frameworks to your actual context — which is where the real operational work happens.
Apply: Extend to a new situation the lesson didn't cover directly.
A colleague at a similar organisation has asked you to recommend whether
they should adopt the two-plugin operations architecture from Chapter 38.
They have three concerns:
1. "We already use a GRC tool for compliance — is there overlap with
the compliance tracking auto-skill?"
2. "We run change management through ServiceNow — does /change-request
conflict with that?"
3. "We don't have time to build ops.local.md from scratch. Can we still
get value from the plugins without it?"
Address each concern with a specific, practical answer. For concern 1 and 2,
explain whether the plugin replaces, complements, or is irrelevant to the
existing tool. For concern 3, explain the minimum viable ops.local.md and
what capability the organisation loses by skipping full configuration.
What you are learning: Real adoption decisions involve existing tooling, integration questions, and minimum viable configurations. This prompt tests whether you can translate the chapter's architecture into practical advice for an organisation with pre-existing operational systems — the real-world adoption context.