Prospect Intelligence and ICP Calibration
In Lesson 1, you installed the Revenue Engine, ran your first research brief, and learned to spot hallucinated data. Now you will turn Farah's intuition into a data-driven ICP that the agent can use on every prospect in the pipeline.
Farah is NexaFlow's top rep. She closes at 340% of quota while the other three reps average 60%. Ask her to describe her ideal customer and she says "mid-size 3PL, growing fast, legacy systems, new VP trying to make their mark." Ask her to explain why she passes on certain prospects and she says "you just know." That gut-level pattern recognition is real and valuable — but it lives in one person's head. When Farah is on leave, the team reverts to the generic pitch. When NexaFlow hires rep number five, there is no playbook to hand over.
This lesson extracts the patterns from NexaFlow's 20 best deals and encodes them into a configuration file that every rep — and every agent command — can use. By the end, you will have a validated ICP, a complete sales-marketing.local.md, and five ranked research briefs for NexaFlow's target prospects.
Analysing the Closed-Won Deals
In Lesson 1 you generated demo data for NexaFlow, including 20 closed-won deals. Those deals contain the raw signal. Instead of building an ICP from guesswork, you will let the data tell you who NexaFlow's real customers are.
Open the 20 closed-won deals and run:
Analyse NexaFlow's 20 closed-won deals. What patterns do you see
in company size, industry, buyer persona, trigger event, and
sales cycle length?
What to expect: The agent reads your closed-won deals from demo-data.md and produces a pattern analysis. Your output will vary based on the data you generated in L01, but look for these sections:
| Section | What It Shows | What to Verify |
|---|---|---|
| Company size range | Sweet spot and outliers across 20 deals | Range encompasses most deals (14-16 of 20) |
| Industry breakdown | Percentage distribution across verticals | Totals add up to 20 deals |
| Buyer persona | Decision-maker titles ranked by frequency | VP Ops or Director Ops appears as primary |
| Trigger events | Ranked list of events that preceded closed deals | Trigger types reference your deal records, not invented events |
| Sales cycle | Average, fastest, slowest with explanations | Fastest cycles correlate with high-urgency triggers |
| Negative patterns | Patterns from lost or stalled deals | Patterns reference your actual demo data |
Since you generated the demo data in L01, the specific numbers and company names will differ from another student's. The teaching point is the structure of the analysis — five measurable dimensions extracted from deal data — not the exact percentages.
Review your output against these sections. Farah's "you just know" is now decomposed into measurable dimensions — employee sweet spots, persona patterns, trigger events, cycle lengths. These are the signals that separate NexaFlow's closed-won customers from the rest of the market.
Notice what the data reveals that intuition does not. The trigger event breakdown shows that system failures produce the fastest sales cycles (18 days average). Farah knows this instinctively — she prioritises prospects who just had an outage. But the other reps do not know to look for that signal. The ICP will encode it.
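If you want to sanity-check the agent's pattern analysis yourself, the same five dimensions can be tallied with a few lines of Python. Everything below — the field names and the sample deals — is hypothetical; substitute the records from your own demo-data.md.

```python
from collections import Counter
from statistics import mean

# Hypothetical deal records — replace with the fields from your demo-data.md
deals = [
    {"employees": 120, "industry": "3PL", "trigger": "system failure", "cycle_days": 18},
    {"employees": 210, "industry": "freight forwarding", "trigger": "new contract win", "cycle_days": 52},
    {"employees": 65, "industry": "e-commerce fulfilment", "trigger": "system failure", "cycle_days": 21},
    {"employees": 95, "industry": "3PL", "trigger": "none", "cycle_days": 87},
]

# The five dimensions from the analysis, computed directly from the data
sizes = [d["employees"] for d in deals]
print(f"Company size: {min(sizes)}-{max(sizes)} employees")
print("Industries:", Counter(d["industry"] for d in deals).most_common())
print("Triggers:", Counter(d["trigger"] for d in deals).most_common())
print(f"Average cycle: {mean(d['cycle_days'] for d in deals):.1f} days")

# Check the claim that high-urgency triggers correlate with faster cycles
failure_cycles = [d["cycle_days"] for d in deals if d["trigger"] == "system failure"]
print(f"System-failure cycles average {mean(failure_cycles):.1f} days")
```

With 20 real deals instead of 4 placeholders, these tallies let you verify the agent's percentages against the raw records rather than trusting them on sight.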
Building the ICP
Now convert those patterns into a structured ICP configuration. The persona-icp skill auto-activates when you work through ICP prompts — you do not need to invoke it by name.
Use the persona-icp skill to build an ICP definition for NexaFlow
Technologies based on the 20 closed-won deals.
Include:
- Firmographics (industry, size, geography, revenue)
- Technographics (tech signals that indicate fit)
- Timing signals (trigger events ranked by priority)
- Persona profiles (decision maker, economic buyer, champion)
- Negative signals (hard and soft disqualifiers)
What to expect: The agent produces a structured ICP definition in YAML format. Your output will vary, but look for these sections:
| Section | What It Contains | What to Verify |
|---|---|---|
| Firmographic | Industries, company size range, geography, revenue range | Employee range encompasses 18+ of 20 closed-won deals |
| Technographic | Positive and negative technology signals | Signals map to patterns found in the deal analysis |
| Timing | High, moderate, and low priority trigger events | High-priority triggers match the top trigger events from the analysis |
| Personas | Primary buyer, economic buyer, and champion profiles | Titles match the buyer persona breakdown from the analysis |
| Negative signals | Hard and soft disqualifiers | Disqualifiers reference the failed/stalled deals from your data |
| Data sources | Where to find each signal type | Sources are real, verifiable platforms (LinkedIn, Companies House, etc.) |
The ICP dimensions are derived from YOUR demo data, so the specific industries, employee ranges, and trigger events will differ between students. The teaching point is the structure — five ICP dimensions with ranked priorities — not the exact values.
Review the output against the closed-won analysis. Does the ICP capture the patterns you identified? Two things to check immediately. First, the employee range (for example, 50-400) should encompass at least 18 of the 20 closed-won deals — if it misses more than 2, widen it. Second, the high-priority timing signals should map to the trigger events from your analysis. If "system failure" appeared in 4 of 20 deals but the ICP omits it, add it.
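For reference, here is a minimal sketch of the YAML shape to look for. The structure mirrors the five sections in the table above; every value is a placeholder, since your ranges and triggers come from your own demo data.

```yaml
# Illustrative ICP sketch — all values are placeholders, not real NexaFlow data
firmographic:
  industries: [3PL, freight forwarding, e-commerce fulfilment]
  employees: {min: 50, max: 400}
  geography: [Pakistan, UAE, UK]
technographic:
  positive: [legacy TMS, Excel-based reporting, hiring for ops roles]
  negative: [recent competitor implementation]
timing:
  high_priority: [system failure, new VP of Operations]
  moderate_priority: [major contract win, geographic expansion]
personas:
  primary_buyer: VP Operations
  economic_buyer: COO
  champion: Operations Manager
negative_signals:
  hard: [under 50 employees, no operations leadership]
  soft: [modern cloud-native stack with no scaling pain]
```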
Completing sales-marketing.local.md
The skeleton from Lesson 1 already gives you the company profile and a starter configuration. Now prompt the agent to update the file from the ICP section onward, rather than editing it from memory:
Now based on these insights, open `sales-marketing.local.md` and add the
remaining sections starting with:
## Ideal Customer Profile
[Paste the full ICP YAML from the previous section]
Then continue with:
## Brand Voice
- **Tone:** Direct, practical, no jargon. "We speak like operators, not vendors."
- **Avoid:** Buzzwords, "digital transformation", "synergy", "leverage"
- **Mirror:** Use the prospect's operational language — routes, dispatch, on-time rates
## Competitor Intelligence
- **Primary competitor:** RouteMaster Pro — legacy, expensive, slow implementation
- **Secondary:** FleetOps AI — newer, cheaper, limited to Pakistan market
- **Positioning:** Mid-price, fastest implementation (30-day go-live guarantee)
## Sales Methodology
- **Approach:** Consultative — lead with operational diagnosis, not features
- **Cycle length:** 45-60 days average
- **Decision makers:** VP Operations, Director of Operations
- **Champions:** Operations Managers frustrated with manual processes
Apply these changes directly to the file.
Run that prompt and let the agent update sales-marketing.local.md directly. Every skill in the Revenue Engine — prospect research, lead scoring, /competitive-brief — now reads this configuration and filters its output through NexaFlow's context.
Validating the ICP Against Known Deals
The ICP looks good on paper. But an ICP is a calibration instrument, not a finished document. You validate it by scoring deals whose outcomes you already know.
Take 5 of the 20 closed-won deals and score them against the ICP you just built:
Score 5 closed-won deals from demo-data.md against NexaFlow's ICP.
For each deal, show the fit score (0-100) and which ICP
dimensions contributed positively or negatively.
Sample Output:
ICP VALIDATION — 5 Closed-Won Deals
═════════════════════════════════════
Deal 1: Karachi Express Logistics (Karachi, 120 employees)
Fit Score: 88/100 — HOT
✓ Firmographic: Regional 3PL, 120 employees, Pakistan
✓ Technographic: Legacy TMS, Excel-based reporting
✓ Timing: New VP Ops hired 3 months before close
✓ Persona: VP Ops as primary buyer, Ops Manager as champion
✗ Minor gap: No negative signal check (competitor status unknown)
Deal 2: Gulf Chain Solutions (Dubai, 210 employees)
Fit Score: 82/100 — HOT
✓ Firmographic: Freight forwarding, 210 employees, UAE
✓ Timing: Won a major contract 6 weeks before engagement
✓ Persona: Director of Operations as buyer
✗ Technographic: Modern stack (cloud-native) — signals did not
match "legacy TMS" criteria. Score reduced.
Deal 3: Speedline Couriers (Lahore, 65 employees)
Fit Score: 71/100 — HOT
✓ Firmographic: E-commerce fulfilment, Pakistan
✓ Timing: System failure triggered urgent evaluation
✗ Company size: 65 employees — near lower boundary
✗ Persona: CEO-led sale (no operational champion)
Deal 4: Northern Freight (Islamabad, 95 employees)
Fit Score: 76/100 — HOT
✓ Firmographic: Regional 3PL, Pakistan
✓ Technographic: Manual dispatch, Excel reporting
✓ Persona: VP Ops buyer, Fleet Coordinator champion
✗ Timing: No clear trigger event — slow sales cycle (87 days)
Deal 5: Falcon Logistics (Abu Dhabi, 180 employees)
Fit Score: 85/100 — HOT
✓ Firmographic: 3PL, UAE
✓ Technographic: Legacy WMS, hiring for automation roles
✓ Timing: New geography expansion into Saudi Arabia
✓ Persona: COO as economic buyer, Ops Manager champion
SUMMARY: All 5 deals score 60+ (HOT)
Average fit score: 80.4
Lowest: Deal 3 (71) — CEO-led, near size boundary
Highest: Deal 1 (88) — textbook ICP match
All five deals scored 60 or above. The ICP is validated against known outcomes.
You are scoring historical deals to validate the ICP, not to prioritise them for outreach. If your best deals do not score HOT, your ICP definition is wrong — not the deals. The lead-scoring skill is designed for prospects, not closed-won deals; this works as a calibration exercise because fit scores are retrospective. If a known-good deal scores low, go back to the ICP and find the missing dimension. Common fixes: widening the employee range, adding an industry vertical you overlooked, or adding a timing signal that the analysis surfaced but the ICP omitted.
Notice Deal 2 — Gulf Chain scored lower on technographics because they had a modern stack, not legacy systems. This is useful feedback. Not all closed-won customers match the "legacy TMS" signal. You might add "Operational complexity exceeding current tooling capacity" as an additional positive technographic signal to catch companies like Gulf Chain that have modern infrastructure but still need workflow automation at scale.
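The scoring logic itself is simple enough to sketch. Below is a toy weighted model — the dimension weights and the 60-point HOT threshold are assumptions for illustration; the actual lead-scoring skill may weigh things differently.

```python
# Toy fit-score model: each ICP dimension contributes up to a fixed number
# of points, and 60+ counts as HOT. All weights are illustrative assumptions.
WEIGHTS = {"firmographic": 30, "technographic": 25, "timing": 25, "persona": 20}

def fit_score(matches: dict[str, float]) -> tuple[int, str]:
    """matches maps dimension -> match strength between 0.0 and 1.0."""
    score = round(sum(WEIGHTS[dim] * matches.get(dim, 0.0) for dim in WEIGHTS))
    band = "HOT" if score >= 60 else "WARM" if score >= 40 else "COLD"
    return score, band

# A Gulf Chain-style deal: strong everywhere except technographics
score, band = fit_score(
    {"firmographic": 1.0, "technographic": 0.3, "timing": 1.0, "persona": 0.9}
)
print(score, band)
```

Seen this way, Gulf Chain's reduced technographic contribution still leaves the deal comfortably in the HOT band, which is exactly the behaviour the validation exercise is checking for.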
Deep Research on 5 Prospects
With a validated ICP, turn to NexaFlow's actual pipeline. At this point, keep the three entity types separate:
- Closed-won deals are historical customers used to build and validate the ICP.
- Target prospects are the 5 open opportunities in demo-data.md that you now research and rank.
- Competitors are alternative vendors (RouteMaster Pro, FleetOps AI) that you analyse later for positioning.
You are still working as NexaFlow throughout. The only thing changing is what you are analysing: past customers, current prospects, or competitors.
Start by re-running Meridian Logistics, the same prospect from Lesson 1, so you can see how the calibrated ICP changes the brief. Then run the same workflow for the other 4 prospects in demo-data.md.
Use this prompt pattern for each prospect:
Read demo-data.md and find one target prospect.
Generate a full research brief with:
- ICP fit score
- timing signals
- data availability assessment
- recommended outreach angle
For example:
Research TransPak Logistics — build a full intelligence brief.
TransPak Logistics, Karachi. Regional 3PL specialising in
pharmaceutical cold chain. ~150 employees.
After you have all 5 briefs, compare two of them side by side: Meridian Logistics in Leeds (prospect #1 from Lesson 1) versus TransPak Logistics in Karachi. The purpose of this comparison is not to choose between two companies at random. It is to see how data availability changes brief quality across markets even when both prospects may fit the ICP.
Comparing Data Availability: Karachi vs Leeds
PROSPECT COMPARISON — Data Availability
════════════════════════════════════════
TransPak Logistics (Karachi)
────────────────────────────
Firmographic: SECP registration confirms company details ✓
Technographic: Job postings on Rozee.pk (3 ops roles) ✓
Timing: No recent press coverage — limited signal
Financial: No public revenue data — private company
Persona: LinkedIn profiles sparse — 2 of 5 leaders found
Data quality: MODERATE — public records exist but media
coverage and social presence are thin
Meridian Logistics (Leeds, UK)
──────────────────────────────
Firmographic: Companies House filing — verified revenue,
employee count, registered directors ✓
Technographic: 12 job postings on Indeed and LinkedIn ✓
Timing: TechCrunch coverage of recent contract win ✓
Financial: Annual accounts filed at Companies House ✓
Persona: Full LinkedIn profiles for leadership team ✓
Data quality: HIGH — UK regulatory filings, active press,
strong social presence
The difference is structural, not a reflection of company quality. UK companies file annual accounts at Companies House, making revenue and director information public by law. Pakistani companies file with SECP, but the filing requirements are lighter and media coverage of mid-market companies is thinner.
This matters for your sales process. For the Karachi prospect, the agent supplements public data with inference — and inference means higher hallucination risk. Apply the three detection rules from Lesson 1 more aggressively on briefs with thin public data. For the Leeds prospect, the agent has richer verified sources, so the brief is more reliable — but you still check financial claims against Companies House filings rather than trusting the agent's numbers.
When an account executive (AE) provides information that cannot be independently verified (e.g., "he came from Stripe I think"), the brief must either verify it with a source or explicitly flag it as user-provided and requiring verification. Never present unverified AE input as confirmed fact.
Ranking the 5 Prospects
After running all 5 briefs, rank them:
Rank these 5 research briefs by total fit (ICP score +
timing strength + data availability). For the lowest-scoring
prospect, identify which ICP dimensions they fail on.
Sample Output:
PROSPECT RANKING — NexaFlow Pipeline
═════════════════════════════════════
Rank Prospect Fit Timing Data Total
──── ────────────────────────── ──── ────── ────── ─────
1 Meridian Logistics (Leeds) 82 HIGH HIGH 91
2 TransPak Logistics (KHI) 78 MOD MOD 74
3 Al-Safa Transport (Dubai) 75 HIGH MOD 77
4 Greenline Express (LHR) 68 LOW MOD 58
5 Coastal Freight (KHI) 52 LOW LOW 42
LOWEST SCORER: Coastal Freight (52 fit, 42 total)
✗ Firmographic: 30 employees — below 50 minimum
✗ Technographic: No automation signals in job postings
✗ Timing: No trigger events detected
✗ Persona: No VP/Director level — CEO-only company
→ Recommendation: REMOVE from pipeline
The ranking reveals something counterintuitive. Meridian — a UK company outside NexaFlow's primary geography — scores highest because data availability and timing strength compensate for the geographic distance. This is the ICP working as designed: it separates signal from noise and lets you make decisions based on dimensions, not gut feeling.
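The blended "Total" column can be reproduced with a short sketch. Here timing and data availability are mapped to point values and blended with the ICP fit score — the mapping and the weights are assumptions for illustration, so the totals will not match the sample output unless the agent happens to use the same formula.

```python
# Toy blend of ICP fit, timing strength, and data availability into one total.
# The level-to-points mapping and the 60/25/15 weights are illustrative assumptions.
LEVEL_POINTS = {"HIGH": 100, "MOD": 60, "LOW": 20}

def total_fit(icp_fit: int, timing: str, data: str) -> int:
    """Weighted blend: 60% ICP fit, 25% timing, 15% data availability."""
    blended = 0.60 * icp_fit + 0.25 * LEVEL_POINTS[timing] + 0.15 * LEVEL_POINTS[data]
    return round(blended)

prospects = [
    ("Meridian Logistics", 82, "HIGH", "HIGH"),
    ("Coastal Freight", 52, "LOW", "LOW"),
]
for name, fit, timing, data in prospects:
    print(name, total_fit(fit, timing, data))
```

The useful property is ordinal, not absolute: a prospect with strong timing and rich data can out-rank one with a marginally higher raw fit score, which is the counterintuitive Meridian result in action.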
Competitive Positioning
One piece remains. NexaFlow's sales team needs to know how they stack up against RouteMaster Pro and FleetOps AI. This is a separate task from prospect research: prospect briefs tell you who to prioritise, while competitor briefs tell you how to position when those prospects evaluate alternatives.
Start with RouteMaster Pro:
/competitive-brief RouteMaster Pro
Sample Output:
COMPETITIVE BRIEF — RouteMaster Pro
════════════════════════════════════
OVERVIEW
Founded: 2012 | HQ: Singapore | Employees: ~400
Product: Enterprise TMS for large logistics operators
Markets: Southeast Asia, Middle East, UK
POSITIONING vs NexaFlow
RouteMaster strengths:
- Enterprise-grade (500+ employee companies)
- Deep regulatory compliance (cross-border documentation)
- 10-year track record and reference customers
RouteMaster weaknesses:
- 6-12 month implementation cycle (vs NexaFlow 30 days)
- Pricing starts at $5,000/month (vs NexaFlow $800/month)
- Legacy architecture — no API-first integration
NexaFlow wins when:
- Prospect needs go-live in under 90 days
- Budget under $3,000/month
- Mid-market (50-400 employees) — RouteMaster oversells
RouteMaster wins when:
- Prospect has 500+ employees and cross-border operations
- Compliance requirements dominate the buying criteria
- Prospect values vendor longevity over speed
Then run the same command for FleetOps AI. After reviewing both outputs, update your configuration with a prompt instead of editing from memory:
Open `sales-marketing.local.md` and update the `## Competitor Intelligence`
section using the competitive briefs for RouteMaster Pro and FleetOps AI.
Keep this structure:
- **Primary competitor:** RouteMaster Pro
- **Secondary:** FleetOps AI
- **Positioning:** Short summary of where NexaFlow wins, where each
competitor wins, and which differences should shape outreach
Apply the update directly to the file.
Run that prompt and let the agent update sales-marketing.local.md directly. Every subsequent research brief and outreach draft will reference this positioning automatically.
What You Built
- ICP definition validated against 5 closed-won deals (all scored 60+ HOT)
- Complete sales-marketing.local.md with company profile, brand voice, ICP, competitor intel, and sales methodology
- 5 research briefs with fit ranking and data-availability comparison
- Understanding of how ICP quality directly affects the relevance and accuracy of every downstream command
Flashcards Study Aid
Test your understanding of the key concepts from this lesson.
Try With AI
Prompt 1 (Reproduce)
Build an ICP definition for NexaFlow using the 20 closed-won deals.
Identify: firmographics, technographics, timing signals, persona
profiles, and negative signals.
What you're learning: How closed-won analysis reveals patterns that intuitive sellers like Farah know implicitly but cannot articulate. The agent decomposes "you just know" into measurable dimensions — employee range, trigger events, buyer personas — that can be taught, configured, and scaled across the team.
Prompt 2 (Adapt)
Take the ICP you built and score the 5 target prospects from the
demo data. Rank them by total fit. For the lowest-scoring prospect,
identify which ICP dimensions they fail on.
What you're learning: An ICP is a diagnostic tool — it tells you not just WHO to pursue but WHY some prospects are weak. When the lowest-scoring prospect fails on firmographics versus timing versus persona, the diagnosis is different and the response is different. Firmographic failure means remove from pipeline. Timing failure means monitor and revisit. Persona failure means find a different entry point.
Prompt 3 (Apply)
Build an ICP for your own business (or a business you know well)
using 10+ past deals. Run /competitive-brief for your top competitor
and add the competitive positioning to your ICP definition.
What you're learning: ICP and competitive positioning together form the foundation that every subsequent lesson builds on. Lead scoring (L03) scores prospects against this ICP. Outreach (L05) references this competitive positioning. Campaign planning (L10) targets this persona profile. Getting the foundation right here means every downstream command produces more relevant output.