The Revenue Engine Sprint
Thirteen lessons. Research, scoring, outreach, sequences, briefs, content, campaigns, analysis, compliance, agents, dashboards. Every piece of the revenue engine is built. This is the sprint that proves it works.
No new concepts in this lesson. Every skill, every command, every diagnostic framework appeared in Lessons 1 through 13. What changes here is the mode of operation. In prior lessons you ran individual components and evaluated their output in isolation. In this sprint you assemble the complete engine and run it against five fresh prospects, a full campaign, and the revenue dashboard — all in a single session. The clock is running. Errors compound across stages. Your job is to execute, evaluate, and correct in real time.
The sprint has three parts. Part A builds the research-to-meeting pipeline for five new prospects. Part B constructs a campaign with a content calendar. Part C produces the revenue dashboard. Four exercises are required. Three are bonus extensions. Complete the required exercises first, then take on the bonus work if time permits.
If you have 30 minutes instead of 45, complete Exercises 1, 2, 4, and 7. These four cover the core loop: validate your ICP, sprint through five prospects, build a campaign brief, and produce a dashboard. The three bonus exercises deepen specific areas but are not required to demonstrate the end-to-end workflow.
Part A: The Research-to-Meeting Sprint
This is the core of the revenue engine. Five prospects enter. Research briefs, lead scores, ranked priorities, outreach messages, and a call summary come out the other end. Time target: 15 minutes for the required exercises.
Exercise 1: Validate Your ICP (Required)
You built NexaFlow's ICP in Lesson 2. Thirteen lessons later, you have run dozens of research briefs and scoring operations against it. Calibration drifts. Validate it now before using it to score five new prospects.
Your task:
Pull five closed-won deals from NexaFlow's pipeline (use the demo data you generated in Lesson 1, or generate five closed-won companies now). Score each against your current ICP using the lead-scoring skill.
Score these 5 closed-won companies against our current ICP:
1. DataStream Logistics, Lahore (closed $45K, 6-month deal cycle)
2. Gulf Express LLC, Dubai (closed $120K, 3-month deal cycle)
3. QuickHaul Karachi (closed $28K, 2-month deal cycle)
4. Nordic Supply Chain, Stockholm (closed $85K, 4-month deal cycle)
5. Atlas Freight, London (closed $150K, 5-month deal cycle)
All 5 should score 60+ since they already bought. If any score
below 60, the ICP has drifted — identify which dimension is
miscalibrated.
What to look for: Every closed-won deal should score above 60. If DataStream scores 52 because the ICP's revenue threshold is set at $10M and DataStream is a $3M company — but they bought anyway — your revenue threshold is filtering out real buyers. That is Miscalibrated Scoring from Lesson 3: the model penalises a dimension that does not predict buying behaviour.
If calibration has drifted: Edit the offending ICP section in sales-marketing.local.md. Re-score. Confirm all five closed-won deals now score above 60 without loosening criteria so broadly that obviously poor-fit companies also pass.
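The drift check above reduces to a simple threshold pass. A minimal sketch — the company names come from the exercise prompt, but the scores are invented placeholders standing in for the lead-scoring skill's real output:

```python
# Hypothetical sketch: validate ICP calibration against closed-won deals.
# Scores below are placeholders, not real lead-scoring output.

CLOSED_WON = [
    ("DataStream Logistics", 52),
    ("Gulf Express LLC", 78),
    ("QuickHaul Karachi", 64),
    ("Nordic Supply Chain", 71),
    ("Atlas Freight", 83),
]

THRESHOLD = 60

def calibration_report(deals, threshold=THRESHOLD):
    """Return closed-won deals that scored below the threshold."""
    return [(name, score) for name, score in deals if score < threshold]

for name, score in calibration_report(CLOSED_WON):
    print(f"DRIFT: {name} closed but scored {score} (< {THRESHOLD})")
```

Any name this prints is a deal the ICP would have filtered out despite a real purchase — exactly the Miscalibrated Scoring signal described above.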
Exercise 2: The Research and Outreach Sprint (Required)
Five new prospects. Full pipeline from research through outreach. This is the engine running at speed.
Step 1 — Generate or select 5 prospects:
If you have real prospects from your own pipeline, use them. Otherwise, generate five fresh companies in NexaFlow's target market:
Generate 5 new logistics technology prospects for NexaFlow:
- 2 in Pakistan (Karachi or Lahore)
- 1 in UAE (Dubai)
- 1 in UK (London)
- 1 in any other market
For each, provide: company name, HQ city, employee count,
primary logistics service, and one recent business event
(funding, expansion, executive hire, or technology change).
Step 2 — Research and score all 5:
Research each company using the prospect-research skill, then score each with the lead-scoring skill. As research briefs arrive, flag any claims you cannot verify — funding amounts, employee counts, technology stack details. Mark each flagged claim as Hallucinated Data or Verifiable. Do not stop the sprint to verify. Flag and continue.
Step 3 — Rank and select top 3:
Rank all five by composite score. Select the top 3 for outreach.
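The ranking step is a one-line sort. A sketch with placeholder composite scores:

```python
# Sketch: rank prospects by composite score and select the top 3.
# Prospect names and scores are illustrative placeholders.

scores = {
    "Prospect A": 82,
    "Prospect B": 67,
    "Prospect C": 74,
    "Prospect D": 59,
    "Prospect E": 71,
}

ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
top_3 = [name for name, _ in ranked[:3]]
print(top_3)  # ['Prospect A', 'Prospect C', 'Prospect E']
```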
Step 4 — Generate outreach for the top 3:
For each of the top 3 prospects, generate Five Laws-compliant outreach using the outreach skill. After each message is generated, run a quick Five Laws audit:
| Law | Pass? | Evidence |
|---|---|---|
| Specific reference to prospect's business | | |
| Prospect-first (their pain, not your product) | | |
| Single ask | | |
| Under word limit | | |
| No jargon | | |
If any message fails a law, fix it before moving on.
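Only some of the Five Laws can be pre-checked mechanically. A sketch of a partial automated audit — the word limit and jargon list are assumptions standing in for whatever sales-marketing.local.md actually defines, and the first two laws in the table still need a human read:

```python
# Partial Five Laws pre-check. Judgment-based laws (specific reference,
# prospect-first framing) are not checkable here; a human reviews those.

JARGON = {"synergy", "leverage", "best-in-class", "cutting-edge"}  # assumed list
WORD_LIMIT = 125  # assumed limit; substitute your configured value

def audit_message(text: str) -> dict:
    words = text.lower().split()
    return {
        "under_word_limit": len(words) <= WORD_LIMIT,
        "no_jargon": not any(w.strip(".,!?") in JARGON for w in words),
        "single_ask": text.count("?") <= 1,  # rough heuristic only
    }

msg = "Saw your Karachi expansion. Worth a 15-minute call next week?"
print(audit_message(msg))
```

Any `False` in the result means the message goes back for a fix before the sprint continues.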
Step 5 — Call summary for the #1 prospect:
Take your highest-scoring prospect. Run /call-summary using this hypothetical meeting data:
Generate a call summary for a 25-minute discovery call with
[#1 prospect name]. Key meeting findings:
- They confirmed the pain point from the research brief
- Budget exists but requires VP approval above $5K/month
- Current solution is manual spreadsheets + one legacy tool
- Timeline: decision by end of Q2
- Champion: [operations manager name from research brief]
- Next step: technical demo in 2 weeks
Review the call summary. Does it reference specific findings from the research brief? Or is it generic? If it reads like "Great meeting, they have budget and a timeline" without referencing the specific pain point, technology stack, or champion name from the research — that is Context Loss. The intelligence existed. The summary did not use it.
Deliverable: 5 research briefs, 5 lead scores with ranking, 3 outreach messages with Five Laws audits, 1 call summary. Flagged errors noted inline.
Exercise 3: Ghostwrite a Full Sequence (Bonus)
Pick the #1 prospect from Exercise 2. Build a complete outreach sequence:
Build a 6-touch, 21-day outreach sequence for [#1 prospect].
Mixed channels: email (touches 1, 3, 5), LinkedIn (touches 2, 4),
phone (touch 6).
Each touch must:
- Reference a specific finding from the research brief
- Build on the previous touch (not repeat it)
- Include an exit condition (what signal means STOP)
After the sequence is generated, evaluate each touch against the Five Laws. A common failure: touches 4 through 6 become generic because the agent runs out of specific research findings to reference. Touch 1 says "I noticed your Series A announcement last month." Touch 5 says "I wanted to follow up on my previous message." That quality decay across the sequence is a form of Context Loss — the research intelligence was consumed by early touches and not replenished for later ones.
Also check for Over-Automation: does the sequence include exit conditions? If the prospect responds "Not interested" after Touch 2, does Touch 3 still fire? If there are no stop rules defined, the sequence will continue past a negative signal. Add exit conditions if they are missing:
exit_conditions:
  - trigger: "Prospect declines or requests removal"
    action: "Stop sequence immediately"
  - trigger: "Prospect books a meeting"
    action: "Stop sequence, transition to pre-call brief"
  - trigger: "No response after all 6 touches"
    action: "Move to nurture cadence (monthly)"
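The stop rules can be expressed as a small decision function that runs before each touch fires. Event names and action strings here are assumptions, not a real sequencing API:

```python
# Sketch: check exit conditions before sending the next touch.
# Event names and the returned action strings are invented for illustration.

EXIT_EVENTS = {
    "declined": "stop",
    "unsubscribe_requested": "stop",
    "meeting_booked": "stop_and_brief",
}

def next_action(events: list[str], touches_sent: int, max_touches: int = 6) -> str:
    for event in events:
        if event in EXIT_EVENTS:
            return EXIT_EVENTS[event]
    if touches_sent >= max_touches:
        return "move_to_nurture"
    return "send_next_touch"

# A "declined" after Touch 2 must halt the sequence before Touch 3:
print(next_action(["opened", "declined"], touches_sent=2))  # stop
```

The Over-Automation failure described above is exactly the absence of this check: without it, Touch 3 fires regardless of what the prospect said after Touch 2.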
Part B: The Campaign and Content Engine
The research-to-meeting pipeline handles individual prospects. The campaign engine handles markets. Time target: 10 minutes for the required exercise.
Exercise 4: Full Campaign Brief (Required)
Build a campaign for NexaFlow targeting logistics operations leaders.
/campaign-plan
Goal: Generate 50 HOT leads for NexaFlow's logistics data platform
Audience: VP Operations + COO at mid-market logistics companies
(100-500 employees)
Geography: Pakistan, UAE, UK
Budget: $25,000
Timeline: 12 weeks
Channels: LinkedIn, email, content marketing, 1 regional event
Review the campaign brief. Three things to evaluate:
Channel allocation realism. If the agent allocates $8,000 to "a logistics trade show in Karachi," verify that number. A booth at ITCN Asia or a logistics-specific conference may cost $3,000 to $15,000 depending on the event. If the budget assumption is fabricated, that is Hallucinated Data embedded in your campaign plan — every downstream decision built on that budget allocation inherits the error.
Audience sizing. The brief should estimate total addressable audience. If it says "approximately 2,400 VP Operations at mid-market logistics companies across Pakistan, UAE, and UK" — is that number sourced or invented? LinkedIn Sales Navigator can verify this count. If the number is fabricated, your CPL projections are built on fabricated denominators.
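A quick arithmetic check shows why a fabricated denominator matters. With the same budget and lead goal, the required conversion rate swings wildly depending on whether the audience count is real — all figures below are illustrative:

```python
# Sketch: CPL and required-conversion arithmetic for the campaign brief.
# Audience counts are placeholders; verify against a real source
# (e.g. LinkedIn Sales Navigator) before planning against them.

budget = 25_000     # campaign budget ($)
target_leads = 50   # HOT lead goal

def required_conversion(audience: int, leads: int) -> float:
    """Fraction of the audience that must become a HOT lead."""
    return leads / audience

cpl = budget / target_leads
print(f"CPL target: ${cpl:.0f}")
print(f"@2,400 audience: {required_conversion(2_400, 50):.1%}")  # ~2.1%
print(f"@600 audience:   {required_conversion(600, 50):.1%}")    # ~8.3%
```

If the real audience is 600 rather than an invented 2,400, the campaign needs roughly four times the conversion rate to hit the same goal — that is the inherited error the paragraph above warns about.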
Content calendar. After the campaign brief is generated, build the content calendar:
Build a 12-week content calendar for this campaign.
Week-by-week schedule. For each week, specify:
- Content piece (blog post, case study, LinkedIn post, etc.)
- Channel
- Target persona (VP Ops or COO)
- Call to action
- Tie to campaign goal
Deliverable: Campaign brief with budget allocation, 12-week content calendar, and at least one flagged data point that requires verification.
Exercise 5: Content Factory — 10 Assets (Bonus)
Take the campaign's cornerstone asset — a blog post or whitepaper on logistics data infrastructure. Multiply it into 10 derivative assets:
Take this cornerstone asset and produce 10 derivative pieces:
1. LinkedIn post (conversational, first-person)
2. Email newsletter section (3 paragraphs, single CTA)
3. X/Twitter thread (5-7 tweets)
4. Short video script (60 seconds)
5. Infographic outline (5 data points + visual flow)
6. Webinar abstract (title, 3 learning objectives, speaker bio)
7. Sales one-pager (ROI-focused, 1 page PDF layout)
8. Podcast talking points (10 questions for a 20-min episode)
9. Case study outline (problem, solution, results framework)
10. Executive summary (150 words for board-level audience)
After generating all 10, run /brand-review on three key pieces — the LinkedIn post, the sales one-pager, and the executive summary.
What to evaluate: Are the 10 pieces genuinely different formats, or are they the same 300 words with different headers? The LinkedIn post should use conversational tone. The sales one-pager should lead with numbers. The executive summary should be dense and jargon-appropriate for a board audience. If all ten read like the blog post reformatted, the multiplication produced quantity without quality.
Brand voice consistency: Does the LinkedIn post say "leverage our platform" while your brand voice in sales-marketing.local.md forbids "leverage"? Does the one-pager use superlatives that your brand guidelines prohibit? If the generating skills did not consume your brand voice configuration, that is Context Loss.
Part C: The Revenue Dashboard
The pipeline handles prospects. The campaign handles markets. The dashboard handles the business. Time target: 5 minutes for the required exercise.
Exercise 6: Pipeline Health Audit (Bonus)
Run a pipeline health check using NexaFlow's pipeline data from earlier lessons.
/pipeline-review
Run a deal health analysis on NexaFlow's current pipeline.
For each deal in proposal or later stages, evaluate:
- Days in current stage
- Last activity date
- Champion status (identified / engaged / at risk)
- Competitive threat level
- Probability assessment (agent vs CRM probability)
Then run a three-scenario forecast:
/forecast
Generate a 3-scenario revenue forecast for NexaFlow Q2 2026:
- Best case: all deals close at current probability
- Likely case: apply historical close rate to each stage
- Worst case: remove any deal with no activity >30 days
From the forecast output, produce deal health briefs for the top 3 opportunities. Each brief should include: the deal name, current stage, probability assessment, risk factors, and the one action most likely to advance the deal this week.
What to evaluate: Compare the agent's probability assessment against the CRM's stored probability. If the CRM says Crescent Freight is at 60% but the agent says 35% because the deal has been in Proposal stage for 34 days with no activity — which assessment is more accurate? The CRM stores the rep's subjective estimate. The agent applies objective criteria. Neither is always right. The value is in the gap between them. A 25-point gap means someone needs to investigate.
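The three scenarios reduce to three rollups over the same deal list. A sketch with invented deals, an assumed haircut for the likely case, and the 30-day activity rule for the worst case:

```python
# Sketch of the three-scenario forecast logic. Deal data is invented;
# the 0.8 likely-case haircut is an assumption, not NexaFlow's history.

deals = [
    # (name, value, crm_probability, days_since_activity)
    ("Crescent Freight", 90_000, 0.60, 34),
    ("Harbor Logistics", 60_000, 0.40, 5),
    ("Metro Haul", 40_000, 0.75, 12),
]

best = sum(v * p for _, v, p, _ in deals)
likely = sum(v * p * 0.8 for _, v, p, _ in deals)          # assumed close-rate haircut
worst = sum(v * p for _, v, p, d in deals if d <= 30)      # drop stale deals

print(f"Best: ${best:,.0f}  Likely: ${likely:,.0f}  Worst: ${worst:,.0f}")
```

Note how Crescent Freight, stale at 34 days, drops out of the worst case entirely — the same objective criterion behind the agent's 35% reassessment.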
Exercise 7: The RevOps Dashboard (Required)
Configure the Revenue Reporting Agent from Lesson 13 with all metrics, then produce three outputs.
Output 1 — Weekly dashboard:
Configure the revenue-reporting-agent for NexaFlow with these metrics:
Pipeline metrics:
- Total pipeline value and weighted pipeline
- Deals by stage (count and value)
- Average days in each stage
- Deals at risk (>30 days without advancement)
Activity metrics:
- Meetings held this week
- Outreach messages sent
- Response rate by channel
Forecast metrics:
- Quarterly target vs weighted pipeline
- Gap to target
- Top 3 deals most likely to close this quarter
Produce the weekly dashboard for the week of March 10-14, 2026.
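Two of the rollups the configuration asks for — weighted pipeline and gap to target — are straightforward to sketch. Figures are illustrative:

```python
# Sketch: weighted pipeline and gap-to-target, the core dashboard math.
# Deal values, probabilities, and the target are placeholders.

pipeline = [
    ("Deal A", 120_000, 0.60),
    ("Deal B", 80_000, 0.25),
    ("Deal C", 45_000, 0.90),
]
quarterly_target = 200_000

total = sum(v for _, v, _ in pipeline)
weighted = sum(v * p for _, v, p in pipeline)
gap = quarterly_target - weighted

print(f"Total: ${total:,}  Weighted: ${weighted:,.0f}  Gap: ${gap:,.0f}")
```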
Output 2 — Executive email:
Generate a weekly executive email for NexaFlow's CEO.
Requirements:
- Maximum 5 bullets
- Maximum 150 words
- Lead with the single most important pipeline change this week
- Include: pipeline value change, deals at risk count, forecast gap
- Close with the one action the sales team needs to take this week
Output 3 — Leading indicator alert:
Define one leading indicator alert that the dashboard should monitor continuously:
Configure a leading indicator alert:
Metric: HOT-to-SAL (Sales Accepted Lead) conversion rate
Threshold: Alert when conversion rate drops below 25%
(baseline: 35% from last quarter)
Cadence: Check daily
Action: Flag in daily digest + notify sales manager
Why this metric: A dropping HOT-to-SAL rate means either the
ICP is generating false positives (scoring problem) or the
outreach is failing to convert genuine interest into meetings
(execution problem). Either way, it's a leading indicator of
future pipeline decline — the revenue impact won't appear for
6-8 weeks, but the diagnostic signal is visible now.
Deliverable: Weekly dashboard, 5-bullet executive email (150 words max), and one configured leading indicator alert with rationale.
What You Built
- A validated ICP scored against closed-won deals to detect calibration drift
- Five prospect research briefs with hallucination flags
- Five lead scores with three-dimension breakdowns and routing classifications
- Three Five Laws-compliant outreach messages with law-by-law audits
- A campaign brief with content calendar, budget allocation, and measurement framework
- A weekly revenue dashboard with pipeline metrics and leading indicator alerts
- A diagnostic log identifying all five Agent Output Taxonomy errors during live execution:
| Error Type | First Taught | What You Can Do |
|---|---|---|
| Hallucinated Data | L01 | Flag unverifiable claims in research briefs and campaign data |
| Miscalibrated Scoring | L03 | Trace false positives to specific ICP sections and recalibrate |
| Compliance Gap | L05 | Audit outreach against Five Laws AND jurisdiction-specific regulations |
| Over-Automation | L06 | Add exit conditions to sequences and human gates to agent workflows |
| Context Loss | L07 | Trace intelligence flow across pipeline stages and identify where it drops |
The division of labour that runs through every lesson: the agent researches, drafts, and recommends. The sales professional decides and sends. That boundary — inform versus act — is what makes the revenue engine trustworthy at scale.
Try With AI
Use these prompts in Claude or your preferred AI assistant with your Sales, Marketing, and RevOps extension plugins installed.
Prompt 1: Reproduce the Sprint
I want to run the complete revenue engine sprint. Here is my setup:
- Product: [your product or NexaFlow's logistics data platform]
- ICP: [paste your ICP from sales-marketing.local.md]
- Target market: [your markets]
Run these steps in sequence and time each one:
1. Score 5 closed-won deals to validate ICP (all should be 60+)
2. Research 5 new prospects in my target market
3. Score all 5 and rank by composite score
4. Generate Five Laws-compliant outreach for the top 3
5. Generate a call summary for the #1 prospect
6. Build a campaign brief ($25K, 12 weeks, 50 HOT leads target)
7. Produce a weekly revenue dashboard
At the end, tell me:
- Total time for the full sprint
- How many Agent Output Taxonomy errors you spotted
- Which pipeline stage had the lowest output quality and why
What you are learning: Execution speed and error detection under pressure. The first time through this sprint, you will likely spend 40-50 minutes. The second time, with a tuned ICP and familiar pipeline, it should take 20-25 minutes. The gap between first and second run measures how much of the sprint is setup versus execution — and setup time drops to near zero once your configuration is dialled in.
Prompt 2: Transfer to a Different Industry
I have been running the revenue engine for logistics technology.
Now I want to adapt it for [choose: healthcare SaaS / fintech /
legal tech / e-commerce / cybersecurity / your industry].
For this new industry:
1. What percentage of the revenue engine transfers directly
with zero changes? (ICP structure, scoring model, Five Laws,
pipeline stages, dashboard metrics)
2. What needs reconfiguration? (ICP criteria, compliance
jurisdictions, channel allocation, content formats)
3. What is industry-specific and does not exist in the current
engine? (Regulatory requirements, sales cycle patterns,
buying committee structures, industry data sources)
Build a migration checklist: every config file, skill, and
command that needs to change, and what the change is.
What you are learning: Revenue engine portability. The Five Laws apply to any B2B outreach. The three-dimension scoring model (Fit + Timing + Engagement) works across industries — only the criteria within each dimension change. The pipeline stages are universal. What changes between industries is the ICP content, the compliance framework, the content formats that resonate with the audience, and the data sources for research. Understanding what transfers and what requires reconfiguration is the difference between rebuilding the engine from scratch for each client and reconfiguring an existing system in an afternoon.
Prompt 3: Apply to Your Business
I want to run the revenue engine sprint for my actual business.
My business: [describe your product/service]
My market: [describe your target customers]
My current pipeline: [number of active prospects, average deal size]
My biggest sales challenge: [what is not working today]
Help me:
1. Build an ICP for my business (not NexaFlow's)
2. Research 5 real prospects from my market
3. Score them against my ICP
4. Generate outreach for the top 3
5. Build a campaign brief with realistic budget for my stage
6. Produce a dashboard with the 3 metrics that matter most
for my current pipeline size
At the end, write a 3-sentence summary:
- Sentence 1: What I built (the system)
- Sentence 2: What surprised me (unexpected finding)
- Sentence 3: What I would change (improvement for next sprint)
What you are learning: The gap between demo data and real data. NexaFlow's pipeline is clean, structured, and designed to illustrate specific concepts. Your pipeline has missing data, inconsistent formatting, prospects who do not fit neatly into scoring dimensions, and deals that defy standard stage progressions. Running the engine against real data reveals which components are robust (they work despite messy inputs) and which are fragile (they break when data does not match the expected format). That fragility map is your improvement roadmap.