Updated Mar 13, 2026

Campaign Performance Analysis

In Lesson 10, you built a 12-week campaign with a content calendar and measurement framework. It is now Week 5. The numbers are in. Time to analyse.

NexaFlow's whitepaper downloads are above target. LinkedIn impressions look strong. But LinkedIn click-through rate is just below benchmark. Email open rates are solid, yet email click rates are weak. Trade press performance is below expectation. Zara has a dashboard full of green and amber indicators. The question is not "how are we doing?" — the question is "what should we do differently next week?"

Most marketing teams stare at dashboards and draw intuitive conclusions. This lesson teaches a different approach: generate structured analysis with specific optimisation recommendations, then evaluate those recommendations with your domain expertise before acting. You will run the base plugin's /performance-report, compare it against the extension's deeper analysis, pull competitive intelligence with /competitive-brief, and set up a weekly cadence that turns data into decisions.

The Demo Data: Week 5 Campaign Results

Before you can analyse performance, you need performance data. If you have a connected analytics platform (Amplitude, HubSpot, LinkedIn Campaign Manager), the /performance-report command can pull data directly through connectors. If you do not have connectors set up — which is the case for most students working through this lesson — generate demo data that matches NexaFlow's Week 5 scenario:

Generate Week 5 performance data for NexaFlow's Q2 campaign.

Include these channels with the following performance characteristics:
- Whitepaper downloads: above the 50-lead target pace (on track for 55-60)
- LinkedIn impressions (PK/UAE): above target
- LinkedIn CTR (PK/UAE): just below the 0.8% benchmark (showing 0.72%)
- LinkedIn Ads (UK): CPL at $680, below the $750 pause threshold
- Email open rate: strong (28%)
- Email click rate: weak (1.8%, below the 3% benchmark)
- Cost per whitepaper download: $380
- Cost per HOT lead (score 60+): $520
- Trade press mentions: 1 (target was 3 by Week 5)
- Google Search CTR: 3.2% (above the 1.5% pause threshold)
- WhatsApp response rate (PK): 24%
- LinkedIn Organic engagement: 3.8%

Format as a structured table with: channel, metric, actual value, target/benchmark, status (above/below/at target).

Output:

NEXAFLOW Q2 CAMPAIGN — WEEK 5 PERFORMANCE DATA
══════════════════════════════════════════════════════════════
Channel Metric Actual Target Status
─────────────────────────────────────────────────────────────
Whitepaper Downloads Total downloads 28 25 Above ✓
LinkedIn Ads (PK/UAE) Impressions 45,200 40,000 Above ✓
LinkedIn Ads (PK/UAE) CTR 0.72% 0.80% Below ✗
LinkedIn Ads (PK/UAE) CPL $310 $400 Below ✓
LinkedIn Ads (UK) CPL $680 $750 Below ✓
LinkedIn Ads (UK) Leads generated 4 5 Below ✗
Email Nurture Open rate 28% 22% Above ✓
Email Nurture Click rate 1.8% 3.0% Below ✗
Google Search CTR 3.2% 1.5% Above ✓
Google Search Leads generated 4 4 At target
WhatsApp (PK) Response rate 24% 20% Above ✓
LinkedIn Organic Engagement rate 3.8% 2.0% Above ✓
Trade Press Mentions 1 3 Below ✗
Cost Metrics Cost per download $380 $500 Below ✓
Cost Metrics Cost per HOT lead $520 $500 Above ✗
HOT Leads (score 60+) Total 18 21 Below ✗
══════════════════════════════════════════════════════════════
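If you want to sanity-check a table like this mechanically, the status column can be derived from actual vs target once you know which metrics are lower-is-better (costs) and which are higher-is-better (everything else). A minimal sketch, with hypothetical field names and a subset of the rows above:

```python
# Sketch: derive an on/off-track status for Week 5 metrics.
# The lower_is_better flag and field layout are illustrative assumptions.

metrics = [
    # (channel, metric, actual, target, lower_is_better)
    ("Whitepaper Downloads", "Total downloads", 28, 25, False),
    ("LinkedIn Ads (PK/UAE)", "CTR %", 0.72, 0.80, False),
    ("LinkedIn Ads (UK)", "CPL $", 680, 750, True),
    ("Email Nurture", "Click rate %", 1.8, 3.0, False),
    ("Cost Metrics", "Cost per HOT lead $", 520, 500, True),
]

def status(actual, target, lower_is_better):
    """A metric is on track when it beats its target in the right direction."""
    if actual == target:
        return "At target"
    good = actual < target if lower_is_better else actual > target
    return "On track" if good else "Off track"

for channel, metric, actual, target, lib in metrics:
    print(f"{channel:24s} {metric:22s} {status(actual, target, lib)}")
```

The point of the direction flag is the trap in the table itself: a CPL below target is good, a CTR below target is not, and a dashboard that treats "below" as uniformly bad will mislabel half the cost rows.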

Save this data. You will paste it into the performance analysis commands.

Running /performance-report

The /performance-report command from the base marketing plugin takes campaign data and returns a structured analysis with optimisation recommendations. If you have analytics connectors, the command pulls data automatically. Without connectors, paste the Week 5 data:

/performance-report
Campaign: NexaFlow Q2 Lead Generation
Period: Week 5 of 12

[Paste the Week 5 performance data table from above]

Provide: channel-by-channel analysis, 3 optimisation recommendations,
and a budget reallocation suggestion.

What to expect: The /performance-report produces a structured campaign analysis. Your output will vary, but look for these sections:

Section | Intent | What to Verify
--------------------------------------------------------------------------
Channel performance summary | Per-channel status with action recommendation | Each channel rated (STRONG/GOOD/WATCH/WEAK) with specific next step
Recommendations (3) | Specific optimisation actions | Each recommendation names the problem, the fix, and expected impact
Budget reallocation suggestion | Where to move money based on efficiency | Shifts budget FROM underperforming TO high-efficiency channels
Your output will vary

The report depends on the Week 5 data you generated. The teaching point is evaluating the recommendations: (1) Is each recommendation specific enough for Zara to execute Monday morning? (2) Is the expected impact realistic? (3) Can the team actually do this given their capacity? A recommendation that fails any of these tests needs iteration.

Comparing the Extension's Performance Analysis

Now run the same data through the extension's performance-analysis skill. The extension auto-activates because the prompt includes campaign performance data with ICP and multi-market context. Paste the same Week 5 data and add:

Analyse this Week 5 data using the performance-analysis skill.
Include ICP-filtered analysis, regional benchmark comparison,
and three-dimension scoring integration.

[Paste the same Week 5 performance data]

The extension adds three dimensions the base report does not cover:

What to expect from the extension:

Section | Intent | What to Verify
--------------------------------------------------------------------------
ICP-filtered analysis | What percentage of leads match your target persona | Breakdown: ICP-matched vs adjacent vs non-ICP
Regional benchmark comparison | Your metrics vs local market benchmarks (not global) | Pakistan, UAE, and UK each have different baselines
Three-dimension scoring integration | Fit/Timing/Engagement breakdown of HOT leads | Identifies which scoring dimension is weakest across your leads
Your output will vary

The extension adds context the base report lacks. The teaching point is the difference in judgment: the base report may flag a metric as "below target" using a global benchmark, while the extension shows it is actually above the local benchmark. The real problem may not be the metric the base report flagged — it may be ICP match rate or timing score weakness.
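The ICP filter is simple to picture in code. A minimal sketch of the breakdown the extension reports, assuming hypothetical lead attributes (role, industry) and a made-up matching rule:

```python
# Sketch: bucket leads into ICP-matched / adjacent / non-ICP.
# The attributes and matching rules are illustrative assumptions,
# not the extension's actual implementation.
from collections import Counter

leads = [
    {"role": "VP Operations", "industry": "3PL logistics"},
    {"role": "Operations Manager", "industry": "3PL logistics"},
    {"role": "Marketing Intern", "industry": "retail"},
    {"role": "VP Operations", "industry": "freight forwarding"},
]

def icp_tier(lead):
    """Full role+industry match is ICP; a partial match is adjacent."""
    role_ok = "Operations" in lead["role"]
    industry_ok = "logistics" in lead["industry"]
    if role_ok and industry_ok:
        return "ICP"
    if role_ok or industry_ok:
        return "adjacent"
    return "non-ICP"

counts = Counter(icp_tier(lead) for lead in leads)
total = len(leads)
for tier in ("ICP", "adjacent", "non-ICP"):
    print(f"{tier}: {counts[tier]}/{total} ({100 * counts[tier] / total:.0f}%)")
```

A campaign with 28 downloads but a low ICP percentage is spending on the wrong audience, which no volume metric will reveal.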

What the Extension Adds

Read both reports side by side. Three differences typically change the recommended actions:

Dimension | Base Plugin | Extension
--------------------------------------------------------------------------
Benchmark context | Uses global benchmarks | Uses regional benchmarks (which may tell a different story)
Lead quality | Counts all leads equally | Filters by ICP match — reveals whether targeting is too broad
Lead readiness | Reports HOT leads as a single count | Breaks into Fit/Timing/Engagement — shows which dimension is weakest

The base report may flag a channel as underperforming. The extension may show that same channel is actually above the local benchmark — the "problem" was a global target that does not account for regional differences. Your domain expertise decides which report leads to better budget decisions.
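The benchmark-context difference is easy to make concrete. A sketch of the same observed CTR judged two ways; the regional benchmark value here is an illustrative assumption, not a published figure:

```python
# Sketch: one observed CTR, two verdicts depending on the yardstick.
# Benchmark values are illustrative assumptions, not real published data.

ctr = 0.72  # observed LinkedIn CTR, %

global_benchmark = 0.80    # the base report's yardstick
regional_benchmark = 0.65  # hypothetical Pakistan/UAE baseline

def verdict(value, benchmark):
    return "above" if value > benchmark else "below"

print(f"vs global benchmark:   {verdict(ctr, global_benchmark)}")    # below
print(f"vs regional benchmark: {verdict(ctr, regional_benchmark)}")  # above
```

Same number, opposite conclusions. The budget decision depends entirely on which yardstick your market actually warrants.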

Evaluating the Recommendations

Both reports produce recommendations. Before acting on any recommendation, evaluate it through three questions:

Is it specific enough to execute? "Improve email CTAs" is vague. "Rewrite emails 4-6 with value-specific CTAs using customer metrics from the TCS case study" is specific. Zara can act on the second version Monday morning. If a recommendation requires a follow-up conversation to clarify what it means, it is not specific enough.

Is the expected impact realistic? The base report suggests email click rate improvement to 2.5-3.5% from a CTA rewrite. Is that realistic? Industry benchmarks show CTA-specific rewrites typically improve click rates by 0.5-1.5 percentage points. From a 1.8% baseline, 2.3-3.3% is a reasonable range. The report's estimate of 2.5-3.5% is slightly optimistic but not unrealistic.
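The arithmetic behind that sanity check is worth making explicit. A sketch using the figures from the paragraph above:

```python
# Sketch: sanity-check a projected click-rate range against a typical lift.
baseline = 1.8             # current email click rate, %
typical_lift = (0.5, 1.5)  # percentage-point gain typical of a CTA rewrite

reasonable = (baseline + typical_lift[0], baseline + typical_lift[1])
claimed = (2.5, 3.5)       # the report's projected range

print(f"Reasonable range: {reasonable[0]:.1f}%-{reasonable[1]:.1f}%")
# A claim is plausible if it overlaps the reasonable range.
overlap = claimed[0] <= reasonable[1] and claimed[1] >= reasonable[0]
print("Claim plausible" if overlap else "Claim unrealistic")
```

The report's 2.5-3.5% overlaps the 2.3-3.3% reasonable range but sits at its optimistic edge, which is exactly the verdict the paragraph reaches.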

Can NexaFlow's team actually execute this? The trade press recommendation calls for pitching 2 contributed articles. Zara is the only content person. She is already producing 3 content calendar entries per week (from L10). Adding 2 article pitches means 5 content tasks in one week. Evaluate whether this is feasible or whether the pitch should be spread across 2 weeks.

If a recommendation fails any of these three tests, iterate:

Recommendation 2 says to pitch 2 articles this week.
Zara is already at capacity with 3 content calendar entries.
Revise: spread the pitches across weeks 6 and 7 (one per week).
What is the adjusted timeline for reaching 3 trade press mentions?

The agent adapts the recommendation to the capacity constraint. This is the evaluation loop: generate recommendation, test against reality, refine until executable.

Competitive Context with /competitive-brief

Campaign performance does not exist in a vacuum. What competitors say shapes how prospects perceive your message. Run /competitive-brief to pull positioning intelligence:

/competitive-brief
Company: LogiFlow Solutions (NexaFlow's primary competitor)
Market: 3PL logistics automation, Pakistan and UAE
Focus: How they position against smaller competitors like NexaFlow

What to expect: The /competitive-brief produces positioning intelligence. Your output will vary, but look for these sections:

Section | Intent | What to Verify
--------------------------------------------------------------------------
Positioning | How the competitor describes themselves | Key messages and market claims
NexaFlow gap | Where the competitor's positioning exposes your differences | Specific dimensions where you compete differently
Content analysis | What the competitor is publishing | Volume, topics, engagement levels
Messaging opportunity | Gaps the competitor leaves open | Areas where NexaFlow can own the conversation
Your output will vary

Competitive intelligence from an agent is a starting point. Verify claims against current public sources. The teaching point is using the brief: take the messaging opportunity and feed it into next week's content calendar entries to differentiate against the competitor's blind spots.

Meridian Logistics in Leeds faces a different competitive landscape. Their competitors lead with post-Brexit customs automation and HMRC compliance — a positioning battle where regulatory credibility matters more than AI capability. The competitive brief for Meridian's market would emphasise compliance and established client references rather than technology differentiation. This is why competitive positioning is market-specific: the same company needs different messaging in Karachi versus Leeds.

Using Competitive Intel in Week 6

Take the messaging opportunity from the competitive brief and feed it into next week's content:

Given this competitive brief, adjust NexaFlow's Week 6 content
calendar entries. The content calendar currently has:
- Blog post on warehouse automation
- LinkedIn article on fleet tracking
- Email nurture #4

Adjust messaging to differentiate against LogiFlow's
enterprise-and-compliance positioning. Lean into NexaFlow's
AI automation advantage and "built for growing 3PLs" message.

The agent revises the content angles. The warehouse automation blog becomes "How AI Dispatch Beats Manual Workflows — What Enterprise Platforms Won't Tell You." The LinkedIn article shifts from generic fleet tracking to "Why Growing 3PLs Need AI-Native Tools, Not Retrofitted Enterprise Software." Each adjustment positions NexaFlow's strength against LogiFlow's blind spot.

The Weekly Cadence

Analysis without rhythm is a one-time exercise. Build a recurring cadence that turns performance data into weekly decisions:

Day | Activity | Who | Output
--------------------------------------------------------------------------
Monday | Review channel metrics against thresholds | Zara | Pause/continue decisions per channel
Wednesday | Sales + marketing align on lead quality | Zara + Sales Rep 1 | Feedback on which leads converted
Friday | Run /performance-report + extension analysis | Zara | 3 recommendations for next week

Why This Order Matters

Monday starts with channel metrics because you need to know immediately if any channel has hit a pause threshold from L10's measurement framework. If UK LinkedIn CPL crosses $750 for the second consecutive week, you pause spending Monday morning — not Friday afternoon after spending another $700.
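The Monday scan is mechanical enough to sketch. The two-consecutive-weeks rule below follows the paragraph; the weekly CPL history is hypothetical example data:

```python
# Sketch: pause a channel when CPL exceeds its threshold two weeks running.
# The weekly CPL figures are illustrative example data.

PAUSE_THRESHOLD = 750  # UK LinkedIn CPL cap, $

weekly_cpl = {4: 760, 5: 780}  # hypothetical weeks 4 and 5

def should_pause(history, threshold, consecutive=2):
    """Pause only after `consecutive` straight weeks over threshold."""
    recent = [cpl for _, cpl in sorted(history.items())][-consecutive:]
    return len(recent) == consecutive and all(c > threshold for c in recent)

print(should_pause(weekly_cpl, PAUSE_THRESHOLD))  # True -> pause Monday morning
```

Requiring two consecutive breaches avoids pausing on one noisy week, while still catching the overspend before Friday's full report.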

Wednesday aligns sales and marketing on lead quality. Marketing generated 28 downloads, but sales worked only 18 HOT leads. Which of those 18 converted to meetings? Which were dead ends despite high scores? This feedback loop corrects the scoring model from L03 and the ICP calibration from L02. Without Wednesday alignment, marketing optimises for volume and sales complains about quality.

Friday runs the full performance report because you need a complete week of data. Monday's channel check is a quick threshold scan. Friday's report is the comprehensive analysis that produces next week's three optimisation actions.

Connecting to L13's Revenue Dashboard

In Lesson 13, you will express this cadence through real plugin assets. The Lead Intelligence Agent and Revenue Reporting Agent replace much of Monday's manual scan. The pipeline skill or /pipeline-review sharpens Wednesday's alignment meeting. pre-call-brief handles meeting prep when a live deal needs context. The /performance-report stays human-driven because recommendations need domain judgment before execution.

What You Built

  1. A Week 5 campaign analysis with 3 specific optimisation actions tied to team capacity
  2. A budget reallocation recommendation based on channel efficiency and regional benchmarks
  3. A competitive brief for differentiated positioning against NexaFlow's primary competitor
  4. A weekly Monday-Wednesday-Friday review cadence connecting to L10's measurement framework
  5. The judgment to distinguish observation-only reporting (CTR is 0.72%) from actionable analysis (CTR is above local benchmark; the real problem is ICP match rate)

Try With AI

Use these prompts in Claude or your preferred AI assistant.

Prompt 1: Reproduce and Compare (Reproduce)

Generate Week 5 performance data for a B2B campaign targeting
VP Operations at logistics companies. Include at least 8 channels
with a mix of above-target and below-target metrics.

Then run two analyses:
1. A standard performance report with 3 recommendations
2. An ICP-filtered analysis that checks what percentage of leads
match the target persona

List the 3 most impactful differences between the reports.
Which report would lead to better budget decisions, and why?

What you are learning: Standard performance reports treat all leads equally. ICP-filtered analysis reveals whether your budget is reaching the right people. A campaign can look healthy on volume metrics (downloads, impressions) while wasting budget on non-ICP contacts. Learning to compare both views builds the habit of questioning surface-level metrics.

Prompt 2: Competitive Positioning Shift (Adapt)

Run a competitive brief for a competitor in your industry
(or use this example):

Company: [Name a real or fictional competitor]
Market: [Your industry and geography]
Focus: How they position against companies like yours

Once you have the brief, answer:
1. What messaging gap does the competitor leave open?
2. How would you adjust next week's content to exploit that gap?
3. If the competitor changes their positioning next month to
close the gap, what is your fallback differentiation?

Compare this to NexaFlow's competitive brief against LogiFlow.
What structural similarities do you see in how competitive
positioning analysis works across different industries?

What you are learning: Competitive positioning is not a one-time exercise. Competitors adapt. The value of /competitive-brief is not the snapshot — it is the cadence. Running it monthly reveals positioning shifts before they affect your pipeline. The fallback differentiation question builds strategic thinking: if your current advantage disappears, what is your next one?

Prompt 3: Your Own Campaign Data (Apply)

If you have access to campaign data from any source — LinkedIn
Campaign Manager, Google Ads, email marketing platform, or even
social media analytics — paste the raw numbers and ask:

1. Analyse this data. Which channels are performing above and
below target? (If I have not defined targets, suggest
industry benchmarks for my market.)
2. Give me 3 specific optimisation recommendations. For each one,
tell me: what to change, expected impact, and how long until
I see results.
3. Compare your recommendations to what I would have done
intuitively. Where does AI analysis add value that gut
instinct would miss?

If you do not have campaign data, use your personal social media
analytics (LinkedIn post engagement, newsletter open rates) as
a starting point. The analysis principles are the same.

What you are learning: Campaign analysis skills transfer across platforms and scales. The same questions — which channels justify their cost, what should I change next week, is this recommendation executable — apply whether you manage a $25,000 B2B campaign or a personal LinkedIn presence. By comparing AI recommendations to your intuition, you discover where structured analysis adds value beyond what experienced marketers already know.
