Continuous Intelligence — Agents & Retrospectives

You have completed one full product management cycle for InsightFlow's Workflow Builder initiative. From the problem brief (L03) through the metrics review (L13), you have built a complete chain of PM artifacts: a discovery brief, user research synthesis, a competitive brief, a feature spec, a PRD, user stories, a roadmap, a prioritised backlog, a sprint plan, stakeholder updates, and a metrics review.

Now comes the work that most PM processes skip: closing the loop. Did the work actually solve the problem? Did you build what you said you would build? Were the metrics you tracked the right ones? And what would you do differently if you started again today with everything you now know?

This lesson has two parts:

Part 1: Retrospective — Use the /retro command from the custom product-strategy plugin to run a structured four-question retrospective on Sprint 1 of the Workflow Builder initiative.

Part 2: Deploy Three Persistent Agents — Deploy the research intelligence, stakeholder update, and roadmap coherence agents to automate the ongoing intelligence and communication layer — so that the next cycle starts with better inputs than this one.

Part 1: Product Retrospectives

Why Most Retros Are Useless

The standard retrospective produces a whiteboard with three columns (what went well / what did not / what to improve), a list of action items without owners, and no connection to future product decisions. It feels thorough. It rarely changes anything.

The /retro command structures retrospectives around four questions that connect past decisions to future behaviour:

Q1: Did it solve the problem?
  What it asks: Were we right about the user need? Did the feature change what it was supposed to change?
  What good looks like: Clear verdict with evidence: SOLVED / PARTIALLY SOLVED / NOT SOLVED / TOO EARLY TO TELL

Q2: Did we build it as intended?
  What it asks: Was the spec good enough? Did engineering deliver what was specified?
  What good looks like: Assessment against delivery data: on time, scope maintained, ACs met

Q3: Were the metrics right?
  What it asks: Were the success metrics measuring the right things? Did we have a failure threshold?
  What good looks like: Metric quality rating: HIGH / MEDIUM / LOW, with specific learning

Q4: What would we do differently?
  What it asks: Not "what went wrong" — what specific process change would you make if you started today?
  What good looks like: Specific, testable process improvement — not a vague intention

Data Required Before Running a Retro

A retro without data produces opinions, not learning. Before invoking /retro, collect:

Outcome data (4-12 weeks post-launch):

  • Adoption: % of target users who used the feature at least once
  • Engagement: how often, for how long (if applicable)
  • Support impact: did tickets in this area increase or decrease?
  • NPS impact: any movement on relevant NPS questions?
  • Business metric: did the primary success metric move?

Delivery data:

  • On time or late? By how long?
  • Were there scope changes during development?
  • Did engineering flag spec ambiguity issues?

Team data (optional but valuable):

  • Did engineering flag spec quality issues during the sprint?
  • Did design flag requirement gaps?
  • Did CS flag anything unexpected post-launch?

Process Improvement Specificity: The STRONG vs. WEAK Rule

The most common retro failure is producing vague process improvements that no one can act on.

WEAK (do not write): "Write better specs"
STRONG (write this instead): "Every AC must be independently testable. If an AC contains 'and', split it before REVIEW status."

WEAK: "Communicate better with engineering"
STRONG: "Engineering lead must confirm architecture notes in PRD before PRD moves to REVIEW status."

WEAK: "Define success metrics earlier"
STRONG: "Primary metric and failure threshold must be in the spec before it enters sprint planning. No spec without metrics."

WEAK: "Improve planning accuracy"
STRONG: "Any ticket estimated >8 points must be broken into smaller items before sprint commitment. If it cannot be broken, the estimate must be reviewed with the engineering lead."

The test for a strong process improvement: can you tell someone exactly what action to take, at what step in the workflow, before they would otherwise have proceeded? If yes, it is specific enough to change behaviour.
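
The "split conjoined ACs" rule from the table above can even be checked mechanically. As a minimal sketch (the function name, the heuristic, and the REVIEW gating are illustrative assumptions, not part of the plugin), a spec lint that flags acceptance criteria bundling two behaviours:

```python
def lint_acceptance_criteria(acs: list[str]) -> list[str]:
    """Return ACs that should be split before the spec moves to REVIEW.

    Crude heuristic from the STRONG rule above: an AC containing the
    word "and" usually bundles two independently testable behaviours.
    (A real lint would need to tolerate legitimate uses of "and".)
    """
    return [ac for ac in acs if " and " in ac.lower()]

acs = [
    "User can save a trigger configuration",
    "User can save and share a trigger configuration",  # bundles two behaviours
]
flagged = lint_acceptance_criteria(acs)
assert flagged == ["User can save and share a trigger configuration"]
```

A check like this could run as a pre-REVIEW gate: if the list is non-empty, the spec stays in DRAFT until each flagged AC is split.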

Worked Example: InsightFlow Sprint 1 Retro

Here is the /retro invocation using illustrative Sprint 1 outcome data:

/retro

Feature: Workflow Builder Sprint 1 — Trigger Configuration UI
Shipped: 2026-04-03 | Retro date: 2026-04-17 (2 weeks post-launch)

Original problem (from L03 brief): InsightFlow users at 100-500 person
companies cannot automate repetitive data workflows without writing code.
This creates reliance on data engineers and delays that block business
velocity.

Target outcome (from L06 spec): 80% of Workflow Builder beta users
complete a trigger configuration within 15 minutes of first session.
Support tickets about "can't automate" requests: reduce by 30%.

Outcome data (2 weeks post-launch):
- Beta users: 47 (of 60 invited accepted)
- 71% completed trigger configuration in first session (target: 80%)
- Median time to configure: 18 minutes (target: 15 minutes)
- Support tickets about automation requests: reduced from 23/week to 14/week
(-39% reduction — exceeds target)
- NPS on trigger config experience: 6.4/10 (no pre-launch baseline defined — gap)
- 3 support tickets about UI confusion on the "step connector" section

Delivery data:
- Delivered: 1 day late vs. sprint plan (auth review meeting delayed WF-003 start)
- Scope: WF-007 (admin audit log) deferred to Sprint 2 (capacity constraint — documented in ADR-003)
- Spec ambiguity: engineering flagged 1 ambiguity (error state copy — resolved in sprint)
- Bugs in first 2 weeks: 2 minor, 0 P0

Team data:
- Engineering reported spec quality as HIGH for WF-001 and WF-002;
MEDIUM for WF-003 (error state handling was underspecified)
- Design flagged: "step connector" UI needs copy revision (matches 3 support tickets)

Sample output from /retro:

PRODUCT RETROSPECTIVE: Workflow Builder Sprint 1 — Trigger Config UI
Shipped: 2026-04-03 | Retro date: 2026-04-17 | PM: [Name]
════════════════════════════════════════════════════════════

Q1: DID IT SOLVE THE PROBLEM?

Original problem: Users cannot automate workflows without writing code.
Target outcome: 80% beta users configure trigger in 15 min;
30% reduction in automation support tickets.

Evidence:
✅ Support ticket reduction: -39% (exceeds 30% target)
⚠️ Trigger config completion: 71% (target: 80% — 9pp short)
⚠️ Median session time: 18 min (target: 15 min — 3 min over)
⚠️ 3 support tickets about step connector UI confusion (new issue)
❌ Pre-launch NPS baseline not defined — cannot compare post-launch 6.4

VERDICT: PARTIALLY SOLVED
The feature reduced automation support requests decisively (-39%). But
it did not fully solve the onboarding experience problem — 29% of beta
users did not complete configuration, and those who did took 20% longer
than target. The step connector UI is a friction point that requires
a fix in Sprint 2.

Q2: DID WE BUILD IT AS INTENDED?

✅ Delivered: 1 day late (auth review meeting scheduling — minor)
✅ Scope maintained: WF-007 deferred by decision (ADR-003), not by surprise
✅ Spec quality: HIGH for WF-001 and WF-002
⚠️ Spec quality: MEDIUM for WF-003 — error state handling underspecified;
resulted in copywriting ambiguity. Engineering resolved mid-sprint.
✅ Bug quality: 2 minor bugs, 0 P0 in first 2 weeks
✅ ACs: all met except error state copy (resolved in sprint)

Q3: WERE THE METRICS RIGHT?

Trigger completion rate (80% target):
Measurable: ✅ | Sensitive: ✅ | Right thing: ✅
Failure threshold defined? ✅ (below 80% = not launch-ready)
Quality: HIGH

Support ticket reduction (30% target):
Measurable: ✅ | Sensitive: ✅ | Right thing: ✅
Quality: HIGH

NPS on trigger experience (no baseline):
Measurable: ✅ | Sensitive: ✅ | Right thing: ✅
Failure threshold defined? ❌ — no pre-launch baseline set
Quality: MEDIUM — got 6.4 post-launch but cannot compare to anything

Median session time (15 min target):
Measurable: ✅ | Right thing: ⚠️ — time-on-task is a proxy for ease,
but 18 minutes might be fine if users succeed confidently. Better metric:
task completion confidence score (post-session survey).
Quality: MEDIUM — consider replacing with confidence score

METRIC LEARNING:
1. Pre-launch NPS baseline must be defined before launching any feature
with a satisfaction target. "NPS will improve" is not measurable
without a starting point.
2. Time-on-task is not always a good proxy for ease. Consider
post-task confidence score for usability evaluation.

Q4: WHAT WOULD WE DO DIFFERENTLY?

Process improvement 1 — Error state specification:
What happened: WF-003 error state handling was underspecified.
Engineering flagged mid-sprint; resolved with PM.
Added delay and ambiguity to sprint.
What to change: Error state behaviour must have copywriter-reviewed
copy in spec before REVIEW status.
How to apply: On next feature: add "error states and copy" as a
required section in every UI spec. Block REVIEW
status if section is empty.

Process improvement 2 — Pre-launch metric baselines:
What happened: NPS not baselined before launch. Post-launch
score (6.4) has no context.
What to change: Every feature with a satisfaction target must
baseline that metric before sprint planning begins.
How to apply: Add to spec template: "Baseline measurements"
section. Required field before REFINED status.

FOLLOW-ON ACTIONS
1. Fix step connector UI copy — Sprint 2, P1 — Design + PM — by 2026-04-24
2. Define NPS baseline for trigger config (survey current users) — PM — by 2026-04-21
3. Review 3 support tickets about step connector — identify top UX issue — PM — by 2026-04-18
4. Consider WF-003 v2: redesign step connector for sub-15-min completion — Sprint 3 candidate

PRODUCT.LOCAL.MD UPDATES RECOMMENDED
Add to quality standards:
- "Error states and copy" is required in every UI spec before REVIEW status
- "Baseline measurements" section required in spec before sprint planning
════════════════════════════════════════════════════════════

Evaluating the Retro Output

Before closing the retrospective, check against the /retro skill's NEVER DO rules:

NEVER run a retro without outcome data — this retro has adoption data (47 users, 71% completion), delivery data (1 day late, spec quality ratings), and support data. If you are missing outcome data, the retro should wait.

NEVER produce only what went wrong — the retro above includes clear ✅ signals: support ticket reduction exceeded target, delivery was nearly on time, spec quality was HIGH for WF-001/002, zero P0 bugs. Protecting what works is as important as fixing what does not.

NEVER close without a product.local.md update — the retro above adds two specific rules to quality standards. If nothing changes in product.local.md after a retro, the retro was a review exercise, not a learning exercise.

Part 2: Deploying Three Persistent PM Agents

One-off commands give you answers when you ask. Persistent agents give you intelligence continuously — monitoring signals you would miss, drafting communications you would not have time to write, and flagging drift before it becomes a crisis.

The three PM agents in the custom product-strategy plugin form a continuous intelligence layer:

Research Intelligence
  Watches: user signals (support, NPS, feature requests)
  Runs: weekly (Monday morning)
  Produces: research digest with escalation flags

Stakeholder Update
  Watches: product status changes
  Runs: weekly (Friday) + triggered by status changes
  Produces: three-version update queue for PM review

Roadmap Coherence
  Watches: backlog vs. roadmap vs. sprint alignment
  Runs: weekly
  Produces: coherence report with required PM actions

Agent 1: Research Intelligence Agent

Purpose: Monitor user signals across all feedback channels and surface emerging patterns before they become crises.

Weekly workflow:

  1. Pull support ticket themes from the last 7 days — categorise, compare to prior week, flag any category up >50%
  2. Pull NPS detractor verbatim — find themes appearing in >20% of detractor responses
  3. Pull feature request votes — sort by total votes + velocity, flag any crossing a vote threshold
  4. Cross-channel detection: any problem appearing in support AND NPS AND feature requests = systemic issue, escalate immediately

Escalation thresholds:

  • Any single support theme >20% of weekly ticket volume, not seen before → generate a problem brief
  • Any safety or security issue → immediate escalation
  • Any NPS score below configured threshold from a named enterprise account
  • Any theme appearing in all three channels simultaneously
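
The escalation thresholds above amount to a small decision rule per signal. Here is one hypothetical sketch of that rule (the record shape, field names, and return labels are assumptions for illustration, not the plugin's actual data model):

```python
def escalation_level(theme: dict, weekly_ticket_volume: int,
                     seen_before: set) -> str:
    """Classify one weekly signal per the thresholds above.

    `theme` is an assumed record like:
    {"name": ..., "support": N, "nps": N, "requests": N, "safety": bool}
    """
    # Safety or security issues escalate unconditionally
    if theme.get("safety"):
        return "ESCALATE"
    # Cross-channel: present in support AND NPS AND feature requests
    if all(theme.get(ch, 0) > 0 for ch in ("support", "nps", "requests")):
        return "ESCALATE"
    # New theme taking >20% of weekly support volume
    share = theme.get("support", 0) / max(weekly_ticket_volume, 1)
    if share > 0.20 and theme["name"] not in seen_before:
        return "ESCALATE"
    # Below threshold: monitor a repeating pattern, otherwise just note it
    return "MONITOR" if theme.get("support", 0) >= 3 else "NOTED"

sig = {"name": "dashboard load time", "support": 4, "nps": 3, "requests": 1}
assert escalation_level(sig, weekly_ticket_volume=12, seen_before=set()) == "ESCALATE"
```

The ordering matters: cross-channel and safety checks run before the volume check, so a systemic signal can never be downgraded to MONITOR by a low ticket share.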

Output format:

RESEARCH WEEKLY DIGEST — Week of [Date]
HEADLINE: [One sentence: the most important user signal this week]

Signal 1: [Theme] — 🔴 ESCALATE / 🟡 MONITOR / 🟢 NOTED
Source: [Support: N / NPS: N / Requests: N]
Pattern: [What users are experiencing]
Quote: "[Representative verbatim]"
Trend: [NEW / Week N of ongoing / Spike vs prior]
Recommended action: [Specific next step]

NEVER DO rules for this agent:

  • Never surface a single user complaint as a signal — one complaint is noise; three is a pattern; five is a signal
  • Never omit the trend comparison — a declining signal is different from a growing one
  • Never produce a digest without a recommended action per signal
  • Never mark a cross-channel signal as low priority

Agent 2: Stakeholder Update Agent

Purpose: Ensure the right people always have the right version of product status. No stakeholder should discover important product news through the grapevine.

Trigger conditions:

  • Scheduled (every Friday): Generate three-version weekly update from current product status
  • Triggered any time: Feature status changes from ON TRACK → WATCH ITEM or AT RISK, feature ships, customer-committed feature changes status, sprint scope changes affecting a roadmap commitment

Weekly workflow:

  1. Pull status of all active features from project tracking (MCP: Jira/Linear/Notion)
  2. Compare to prior week — detect status changes, shipped items, new risks
  3. Generate three versions (executive, engineering, customer-facing)
  4. Queue all three for PM review and approval — never auto-send

The PM review gate is not optional. Every version waits for PM approval before distribution. The agent does the mechanical work of generating three versions; the PM catches errors, adjusts tone, and approves distribution. This division preserves stakeholder trust while saving the PM's time.
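
The essential property of the gate is structural: nothing is sendable until explicitly approved, and any edit resets approval. A minimal sketch (class and method names are hypothetical, not the plugin's interface):

```python
from dataclasses import dataclass, field

@dataclass
class UpdateQueue:
    """Minimal sketch of the PM review gate: drafts accumulate, and
    nothing leaves the queue until the PM approves it explicitly."""
    drafts: dict = field(default_factory=dict)   # audience -> draft text
    approved: set = field(default_factory=set)

    def queue(self, audience: str, text: str) -> None:
        self.drafts[audience] = text
        self.approved.discard(audience)  # any edit resets approval

    def approve(self, audience: str) -> None:
        self.approved.add(audience)

    def sendable(self) -> list:
        # Only explicitly approved versions may be distributed
        return [a for a in self.drafts if a in self.approved]

q = UpdateQueue()
q.queue("executive", "Sprint 1 shipped; step connector fix scoped for Sprint 2.")
q.queue("engineering", "WF-001/002 shipped; WF-007 deferred per ADR-003.")
assert q.sendable() == []          # nothing auto-sends
q.approve("executive")
assert q.sendable() == ["executive"]
```

Note that "no auto-send" is enforced by the data flow itself, not by a flag an agent could forget to check: distribution reads only from `sendable()`.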

Triggered update — customer-committed feature: When any feature with a customer commitment changes status, the agent immediately generates a draft customer communication and flags it as "review required" — not sent, but queued with high urgency.

NEVER DO rules for this agent:

  • Never auto-send any communication without PM review and approval
  • Never send the same version to different audiences
  • Never omit triggered updates when customer-committed features change status

Agent 3: Roadmap Coherence Agent

Purpose: Maintain coherence between the roadmap, backlog, specs, and sprint. Surface drift before it becomes a missed quarterly commitment.

Three weekly checks:

Check 1 — Backlog Orphan Detection

Any backlog item added in the last 7 days without a roadmap theme tag gets flagged to the PM: "New item without roadmap theme — assign to a theme or mark as maintenance/tech debt."

Check 2 — Roadmap Coverage

Every NOW (current quarter) roadmap item is checked for spec status:

  • REFINED spec ✅ — ready for sprint
  • DRAFT spec ⚠️ — not sprint-ready, PM to move to REVIEW
  • NO SPEC ❌ — cannot enter sprint planning, immediate flag
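
The coverage check is a straight mapping from spec status to readiness. As an illustrative sketch (the status strings match the lesson's conventions; the function itself is hypothetical):

```python
def coverage_flag(spec_status) -> str:
    """Map a NOW roadmap item's spec status to the flags above."""
    if spec_status == "REFINED":
        return "READY"                 # ready for sprint
    if spec_status == "DRAFT":
        return "NOT SPRINT-READY"      # PM should move the spec to REVIEW
    return "IMMEDIATE FLAG"            # no spec: cannot enter sprint planning

assert coverage_flag("REFINED") == "READY"
assert coverage_flag(None) == "IMMEDIATE FLAG"
```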

Check 3 — Sprint Alignment

Current sprint's story points are classified:

  • Roadmap-tagged: on track
  • Maintenance/bug fixes: expected (acceptable up to 20%)
  • Untagged/off-roadmap: alert if >30%

If off-roadmap work exceeds 30%, the agent alerts PM and EM: "Current sprint has X% off-roadmap work. At this rate, Y roadmap items will miss their target."
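
The alignment thresholds can be read as a two-tier alert on the off-roadmap share of sprint points. A minimal sketch (the point-breakdown shape and alert strings are assumptions for illustration):

```python
def sprint_alignment(points: dict):
    """Compute the off-roadmap share of sprint points and the alert, if any.

    `points` is an assumed breakdown, e.g.
    {"roadmap": 21, "maintenance": 5, "off_roadmap": 8}.
    Maintenance counts toward the total but is never "off-roadmap".
    """
    total = sum(points.values())
    share = points.get("off_roadmap", 0) / total if total else 0.0
    if share > 0.5:
        return share, "URGENT: roadmap is effectively paused"
    if share > 0.3:
        return share, "ALERT PM + EM: off-roadmap work above 30%"
    return share, None

share, alert = sprint_alignment({"roadmap": 8, "maintenance": 4, "off_roadmap": 8})
assert share == 0.4 and alert.startswith("ALERT")
```

Because maintenance is a separate bucket, a sprint that is 20% bug fixes and 80% roadmap work correctly produces no alert, which is exactly what the NEVER DO rules below require.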

NEVER DO rules for this agent:

  • Never classify maintenance or bug fixes as "off-roadmap" — they are expected and separately tracked
  • Never flag every untagged item as a crisis — surface them weekly for PM triage
  • Never enforce 100% roadmap alignment — some sprint flexibility is healthy; the threshold is >30%, not >0%
  • Never produce a report without recommended actions

Deploying Agents with /schedule

Agents in Cowork can be deployed as persistent, scheduled components using the /schedule command. Each agent runs on its configured schedule and deposits its output in a review queue for PM action.

Deploying the Research Intelligence Agent:

/schedule

Agent: research-intelligence (from product-strategy plugin)
Name: "InsightFlow Weekly Research Digest"
Schedule: Every Monday at 8:00 AM

Configuration:
Product: InsightFlow (B2B SaaS analytics)
Data sources:
Support: [Intercom — connect via MCP]
NPS: [Delighted — connect via MCP if available; otherwise manual input]
Feature requests: [Canny — connect via MCP if available]

Escalation thresholds:
- Any single support theme >20% of weekly volume AND not seen before → generate /brief
- Any theme appearing in support + NPS + feature requests simultaneously → escalate immediately
- Enterprise account NPS below 5 → immediate escalation

Output: Deposit weekly digest in [Cowork folder: /pm-inbox/research/]
PM action required: Review digest Monday; act on red signals before Tuesday standup.

Deploying the Stakeholder Update Agent:

/schedule

Agent: stakeholder-update (from product-strategy plugin)
Name: "InsightFlow Weekly Stakeholder Update Queue"
Schedule: Every Friday at 2:00 PM

Configuration:
Status source: [Linear — connect via MCP]
Distribution:
Executive version: Sarah Chen (CEO), David Park (CPO)
Engineering version: Maria Santos (CTO) + engineering Slack channel
Customer version: James Wilson (Head of CS) for distribution

PM review gate:
All versions queue in [Cowork folder: /pm-inbox/updates/]
PM must approve before distribution — no auto-send.
Trigger review alert to PM at 3:00 PM Friday.

Triggered updates: Any customer-committed feature status change
→ immediate draft, flag as urgent, alert PM and CS.

Deploying the Roadmap Coherence Agent:

/schedule

Agent: roadmap-coherence (from product-strategy plugin)
Name: "InsightFlow Weekly Roadmap Coherence Check"
Schedule: Every Wednesday at 9:00 AM

Configuration:
Backlog source: [Linear — connect via MCP]
Roadmap source: [Notion roadmap — connect via MCP if available]
Sprint window: Current 2-week sprint

Escalation thresholds:
- Backlog orphan (no roadmap tag): flag in weekly report, PM triage by Thursday
- NOW item with no spec AND <3 sprints to target: immediate flag
- Sprint >30% off-roadmap: immediate alert to PM + Maria Santos (CTO)
- Sprint >50% off-roadmap: urgent — roadmap is effectively paused

Output: Deposit coherence report in [Cowork folder: /pm-inbox/coherence/]
PM action required: Review and act on required actions by Thursday EOD.

Keep This File

The process improvement rules from your retro, and the escalation thresholds you configure for the three agents, should be saved in your product.local.md. These agents work best when product.local.md is current — it provides the context (team structure, stakeholder map, customer commitments) that makes each agent's output specific to your product rather than generic.

The Full PM Cycle, Automated

Looking back at the InsightFlow journey from L03 to L14, here is where the three agents would have intervened:

  • L03 identified the workflow automation problem → Research Intelligence: ongoing signal monitoring might have surfaced this earlier
  • L04-L05 user research + competitive brief → Research Intelligence: NPS verbatim and feature requests supplement interviews
  • L11 sprint planning with capacity constraints → Roadmap Coherence: would have flagged WF-007's audit log as a risk before the sprint
  • L12 three stakeholder updates written manually → Stakeholder Update: generates all three versions automatically on Friday
  • L13 monthly metrics review revealed activation drop → Research Intelligence: weekly digest might have caught friction signals earlier
  • L14 retro found missing NPS baseline → Roadmap Coherence: would have flagged "no baseline measurement" in spec coverage check

Agents do not replace the judgment in any of these steps. They give you better inputs, faster — so your judgment is informed by richer data.

Try With AI

Use these prompts in Cowork or your preferred AI assistant.

Prompt 1 — Reproduce (apply what you just learned):

Run a product retrospective on InsightFlow's Workflow Builder Sprint 1.

Feature: Trigger Configuration UI
Shipped: 2026-04-03 | Retro: 2026-04-17

Outcome data:
- Beta users: 47 of 60 invited
- Trigger config completion rate: 71% (target: 80%)
- Median session time: 18 minutes (target: 15)
- Automation support ticket reduction: -39% (target: -30%) ✅
- NPS on trigger experience: 6.4/10 (no pre-launch baseline)
- 3 support tickets about step connector UI confusion

Delivery data:
- 1 day late (auth review meeting scheduling)
- Spec quality: HIGH for WF-001/002, MEDIUM for WF-003 (error states underspecified)
- 2 minor bugs, 0 P0

Answer the four retro questions. For each "what went wrong",
produce a STRONG process improvement (specific rule), not a weak one
("communicate better").

Close with two product.local.md updates.

What you're learning: Practising the full /retro workflow with real data from the sprint you planned in L11. The evaluation test is whether the process improvements are specific enough to change behaviour — "be more careful" fails, "add error state copy as required spec section before REVIEW status" passes.

Prompt 2 — Adapt (change the context):

A different team ran a retrospective on their SSO feature launch.
They produced this process improvement:
"We need to communicate better with the platform team about dependencies."

This is WEAK. Rewrite it as a STRONG process improvement using this
format:
What happened: [specific incident]
What to change: [specific rule or step to add]
How to apply: [on next feature, do exactly X at step Y]

Then: What is the product.local.md update you would add from this
retro finding?

What you're learning: The specificity conversion exercise — taking a real-world vague retro output and making it actionable. This is the hardest part of running retros well.

Prompt 3 — Apply (connect to your domain):

Think of a feature you or your team shipped in the last 3-6 months.

Answer the four retrospective questions:
1. Did it solve the problem? (State the original problem and what the
outcome data shows — use data, not opinion)
2. Did you build it as intended? (Was the spec accurate? Delivered on time?)
3. Were the metrics right? (Were the success metrics measuring the right
things? Did you have a failure threshold?)
4. What would you do differently? (Write one STRONG process improvement —
specific rule, specific step, specific action on next feature)

Then: which of the three PM agents (Research Intelligence, Stakeholder
Update, Roadmap Coherence) would have caught the issue earlier if it
had been deployed before this feature shipped?

What you're learning: The full retro applied to real experience. The final question connects retrospective learning to agent configuration — what escalation thresholds would you set, given what this retrospective taught you?

Exercise: Sprint 1 Retro + Agent Deployment

Plugin: Custom product-strategy
Commands: /retro + /schedule (for agent deployment)
Time: 45 minutes

Step 1 — Gather your data

Before running /retro, compile:

  • Original problem (from your L03 brief)
  • Target outcome (from your L06 spec)
  • Sprint 1 outcome data (from your L13 metrics review — activation, support tickets, NPS if available)
  • Delivery assessment from L11 (on time? scope maintained? spec quality?)

Step 2 — Run /retro on Sprint 1

/retro

Feature: Workflow Builder Sprint 1 — Trigger Configuration UI
Shipped: 2026-04-03 | Retro: 2026-04-17

Original problem: [from your L03 brief]
Target outcome: [from your L06 spec acceptance criteria]

Outcome data:
[Paste your L13 metrics that relate to Sprint 1 — adoption, support
ticket change, NPS if available]

Delivery data:
[From your L11 sprint plan — on time? scope changes? spec quality
issues flagged by engineering?]

Step 3 — Evaluate the retro output

Verify:

  • Does the verdict on Q1 (Did it solve the problem?) cite specific data, not opinions?
  • Does Q3 (Were the metrics right?) include a quality rating (HIGH/MEDIUM/LOW) and a specific learning?
  • Do all process improvements in Q4 pass the STRONG test? (Can you state exactly what action to take, at what step, before you would otherwise have proceeded?)
  • Is the product.local.md update section present with at least one specific rule?

If any process improvement is weak, prompt: "Rewrite process improvement [N] as a specific, testable rule in this format: On the next feature, at [step], the PM must [action] before [proceeding]."

Step 4 — Deploy all three agents

Using /schedule, deploy each agent with InsightFlow-specific configuration:

/schedule

Deploy the following InsightFlow PM agents:

Agent 1: research-intelligence
Schedule: Every Monday at 8:00 AM
Config:
Product: InsightFlow (B2B SaaS analytics, Series B)
Escalation: >20% single theme, cross-channel (all three) = immediate
Output: /pm-inbox/research/

Agent 2: stakeholder-update
Schedule: Every Friday at 2:00 PM
Config:
Audiences: Sarah Chen (CEO), David Park (CPO) / CTO + engineering / CS team
PM review gate: enabled — no auto-send
Triggered: any customer-committed feature status change
Output: /pm-inbox/updates/

Agent 3: roadmap-coherence
Schedule: Every Wednesday at 9:00 AM
Config:
Backlog source: Linear
Orphan threshold: any new item without roadmap tag
Sprint alert: >30% off-roadmap = PM + EM alert
Immediate alert: NOW item, no spec, <3 sprints to target
Output: /pm-inbox/coherence/

Step 5 — Verify first output

After deploying each agent, prompt it to produce its first output:

Research Intelligence Agent: produce this week's research digest
for InsightFlow. Simulate data: 12 support tickets (4 about dashboard
load time — up from 2 last week), 3 NPS detractor responses mentioning
"too slow", 1 feature request for performance improvements (15 new votes).

Stakeholder Update Agent: produce this week's stakeholder update queue
based on Sprint 1 status (Yellow → now Green: trigger config UI shipped,
step connector fix scoped for Sprint 2).

Roadmap Coherence Agent: produce this week's coherence report.
Simulate: Sprint 2 has WF-004 (roadmap-tagged), WF-003 v2 fix
(roadmap-tagged), and 2 untagged items from customer support requests.

Evaluate each agent's output against its NEVER DO rules.

What You Built

You completed the final loop of InsightFlow's product management cycle. Part 1 produced a four-question retrospective on Sprint 1 — with a clear verdict on problem-solving, a delivery assessment, a metric quality evaluation, and at least two specific process improvement rules that update product.local.md. Part 2 deployed three persistent agents that automate the ongoing intelligence and communication layer: the Research Intelligence Agent monitors user signals weekly, the Stakeholder Update Agent generates three-version updates for PM review, and the Roadmap Coherence Agent checks three dimensions of backlog and sprint alignment.

Together, these agents mean the next product cycle starts with better inputs: current user signal intelligence, consistent stakeholder communications, and proactive drift detection — rather than discovering problems retrospectively.
