People Analytics & Agent Operations

Every Monday morning, the CHRO at the EdTech company in Karachi opens four reports. The first is the Knowledge Base Agent's weekly query summary: 312 employee queries handled this week, up from 247 last week, with a spike in questions about the new flexible working policy. The second is the Onboarding Orchestrator's completion report: three new joiners started this week; pre-boarding tasks are 94% complete; one new joiner has not completed their IT access setup, flagged for follow-up. The third is the Policy Maintenance Agent's monthly audit: two statutory rate changes detected, both updated and published; one policy cross-reference inconsistency identified and queued for HR review. The fourth is a recruiting pipeline update for the Team Lead, Data Engineering role: 18 candidates sourced, 9 screened, 4 in interview, average time-in-stage at screening is 11 days — above the 7-day target.

Each report answers a different question. Together, they tell a story about the health of the HR function. The KB agent report says the flexible working policy launch is creating confusion — employees are asking the same questions the policy should have answered clearly. The onboarding report says the process is working well overall but one new joiner needs a follow-up call about IT access. The policy report says the statutory rate monitoring is functioning and two changes were handled without manual intervention. The recruiting pipeline report says the screening stage is slow — something is wrong with how candidates are being evaluated or progressed.

This lesson covers two official plugin skills — /people-report for workforce analytics and /recruiting-pipeline for hiring pipeline management — and then does something that the tool descriptions do not make explicit: it teaches you to read your four persistent agents not as automation systems, but as sensors. Their reports are operational intelligence. The question is whether you know how to extract the signal from them.

People Analytics with /people-report

/people-report generates structured workforce reports from your HR data. It operates across four report types, each answering a different strategic question.

Report Types and the Questions They Answer

| Report Type | Business Question | Key Metrics |
|-------------|-------------------|-------------|
| Headcount | Where are we? Who do we have, doing what, where? | Total headcount by team, level, location; tenure distribution |
| Attrition | Are we keeping people? Who is leaving, from where, and why? | Voluntary attrition rate; regrettable attrition; average tenure at exit |
| Diversity | Is our talent pipeline representative? | Representation by level; promotion rates by group; pipeline diversity |
| Org Health | Is our structure working? Are there structural risks? | Span of control; management layers; flight risk indicators; eNPS |

Key Metrics Worth Tracking

| Metric | What It Reveals | Warning Threshold |
|--------|-----------------|-------------------|
| Voluntary attrition rate | Overall retention health | Industry dependent — know your benchmark |
| Regrettable attrition | Are we losing the right people or the wrong ones? | Any regrettable attrition >5% warrants investigation |
| Average tenure at exit | Is the problem early-tenure or late-tenure? | Patterns shift interpretation completely |
| eNPS (Employee Net Promoter Score) | Employee sentiment and advocacy | Below 0 is a warning sign |
| Span of control | Management structure health | <3 or >12 direct reports flags structural issues |
| Time to productivity (new hires) | Onboarding effectiveness | Benchmark against your org's historical baseline |
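The warning thresholds above can be encoded as simple automated checks. A minimal sketch in Python, assuming the threshold values from the table (the metric key names are illustrative; substitute your own benchmarks):

```python
# Sketch: flag people metrics against the warning thresholds in the table above.
# Metric names and threshold values are assumptions taken from this lesson's table.

def flag_metrics(metrics: dict) -> list[str]:
    """Return a list of warnings for metrics that cross a threshold."""
    warnings = []
    if metrics.get("regrettable_attrition_pct", 0) > 5:
        warnings.append("Regrettable attrition above 5%: investigate")
    if metrics.get("enps", 0) < 0:
        warnings.append("eNPS below 0: employee sentiment warning")
    span = metrics.get("avg_span_of_control")
    if span is not None and (span < 3 or span > 12):
        warnings.append("Span of control outside 3-12: structural issue")
    return warnings

# Example: a quarter with healthy eNPS but high regrettable attrition
print(flag_metrics({"regrettable_attrition_pct": 6.2,
                    "enps": 22,
                    "avg_span_of_control": 7}))
```

The point of a sketch like this is not automation for its own sake: encoding thresholds forces you to decide, in advance, what number triggers an HR action.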

Worked Example: Quarterly People Report

The EdTech company (250 people, Karachi, three years of growth) is preparing for a board meeting. The CHRO needs a one-page view of workforce health.

/people-report quarterly headcount and attrition

Organisation: EdTech company, 250 employees, Karachi, Pakistan
Reporting period: Q1 (January–March)
Data provided:
- Total headcount: 250 (start of quarter: 238)
- New hires: 18 (across Engineering, Product, Sales)
- Departures: 6 (4 voluntary, 2 involuntary)
- Voluntary departures: 2 from Engineering (Senior level), 1 from Sales, 1 from Customer Success
- Average tenure at exit (voluntary): 2.3 years
- eNPS score: +22 (from quarterly survey, 78% participation)
- Team growth: Engineering +8, Product +4, Sales +6

Sample output:

## People Report: Quarterly Headcount & Attrition — Q1

### Executive Summary
Three key findings this quarter:
1. Headcount grew 5% (238 → 250); growth concentrated in Engineering and Sales as planned.
2. Voluntary attrition rate: 1.6% for the quarter (~6.4% annualised). Two Senior Engineering departures warrant investigation — they represent the most expensive and hardest-to-replace exits.
3. eNPS of +22 is healthy; participation at 78% is strong and provides a reliable signal.

### Key Metrics
| Metric | Q1 Value | Trend |
|-----------------------------|----------|---------|
| Total headcount | 250 | ↑ +12 |
| Voluntary attrition (qtr) | 1.6% | ↑ from 0.8% |
| Regrettable departures | 2 | ↑ (both senior) |
| Average tenure at exit | 2.3 yrs | — |
| eNPS | +22 | ↑ from +18 |

### Detailed Analysis
**Attrition concern: Senior Engineering**
Both voluntary departures from Engineering were at Senior level with 2-3 years tenure.
This is the highest-cost attrition pattern: senior engineers at 2-3 years are past their
productivity ramp, fully integrated, and typically hold significant institutional knowledge.
Pattern requires investigation before it becomes a trend.

Recommended action: Conduct stay interviews with remaining Senior Engineers (L5+) this
quarter. Check compensation against market benchmarks using /comp-analysis.

### Recommendations
- Priority: Investigate Senior Engineering attrition. Two exits in one quarter from the
same level may signal a compensation, culture, or management issue.
- Monitor: eNPS is positive and rising, but survey themes should be reviewed for early
signals in the Engineering team specifically.
- Continue: Onboarding velocity is strong (18 hires integrated with minimal incidents).

### Methodology
Attrition rate calculated as: (voluntary departures / average headcount) × 100.
Annualised by multiplying quarterly rate by 4. Regrettable = departures flagged by
manager as undesired exits of high performers.

Your output will vary

The quality of a /people-report output depends entirely on the data you provide. If you upload a full HRIS export, the analysis will be richer and more specific. If you describe the data in text, the analysis will be directional. The framework — executive summary, key metrics, detailed analysis, recommendations — is consistent regardless of data richness.
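The methodology section's arithmetic can be checked directly. A short sketch using the worked example's figures (238 start-of-quarter headcount, 250 end, 4 voluntary departures):

```python
# Verify the attrition arithmetic from the methodology section above.
start, end = 238, 250    # quarter start and end headcount
voluntary = 4            # voluntary departures in the quarter

avg_headcount = (start + end) / 2                   # 244
quarterly_rate = voluntary / avg_headcount * 100    # ~1.64%
annualised = quarterly_rate * 4                     # ~6.56%

print(f"Quarterly voluntary attrition: {quarterly_rate:.1f}%")  # prints 1.6%
print(f"Annualised: {annualised:.1f}%")                         # prints 6.6%
# Note: the report's "~6.4% annualised" comes from annualising the
# already-rounded 1.6% quarterly figure (1.6 x 4 = 6.4).
```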

Recruiting Pipeline with /recruiting-pipeline

/recruiting-pipeline tracks the hiring funnel from sourcing through offer acceptance. Its primary diagnostic value is in time-in-stage analysis: where are candidates spending too long, and what does that tell you about what is broken?

Pipeline Stages and What Delays Reveal

| Stage | Description | If Delayed, Signals... |
|-------|-------------|------------------------|
| Sourced | Identified and reached out | Weak sourcing or unappealing outreach |
| Screen | Phone or video screen | JD attracted wrong candidates; screening criteria unclear |
| Interview | Panel or structured interview | Scheduling friction; hiring manager availability |
| Debrief | Team calibration and decision | Calibration misalignment; decision-making confusion |
| Offer | Extending the offer | Approval bottlenecks; compensation concerns |
| Accepted | Offer accepted | Competitive counteroffer; candidate experience failure |

Worked Example: Team Lead, Data Engineering

Following from Lesson 09, where Zara accepted the internal promotion, the EdTech company now needs to fill the Team Lead, Data Engineering role externally. Omar wants to understand where the pipeline stands.

/recruiting-pipeline Team Lead, Data Engineering — external hire pipeline

Pipeline data (as of Day 28 from job opening):
Sourced: 24 candidates
Screen scheduled: 14 (58% conversion from sourced)
Screen completed: 9 (64% of scheduled; 5 no-shows or withdrew)
Interview: 4 (44% conversion from screen)
Debrief: 1 (currently in progress)
Offer: 0
Time-in-stage (days average):
Sourced → Screen: 5 days
Screen → Interview: 12 days ← above target
Interview → Debrief: 6 days
Time to fill target: 45 days from job opening

Sample output:

## Recruiting Pipeline: Team Lead, Data Engineering — EdTech, Karachi

### Pipeline Summary (Day 28)
| Stage | Count | Conversion | Time in Stage | Target |
|-------------|-------|------------|---------------|--------|
| Sourced | 24 | — | — | — |
| Screen | 9 | 58% | 5 days | ≤7 days |
| Interview | 4 | 44% | 12 days ← ⚠️ | ≤7 days |
| Debrief | 1 | 25% | 6 days | ≤5 days |
| Offer | 0 | — | — | — |

### Diagnostic Finding
**Screen → Interview conversion (12 days average) is the primary bottleneck.**
Candidates are sitting at screen stage for 12 days before receiving an interview invitation.
This is nearly double the 7-day target and likely causes candidate drop-off — strong
candidates at this level are typically in multiple processes simultaneously.

**44% screen → interview conversion** is also below a healthy benchmark.
This could indicate screening criteria are too narrow, or the screen process is
not accurately assessing candidate quality for the role.

### Recommendations
1. **Immediate**: Audit scheduling process for screen → interview transition.
Are interviews being offered within 48 hours of a screen completing?
2. **Review**: Check screen assessment criteria. A 44% pass rate may indicate
the JD is attracting candidates who are close but not qualified — or the
screen rubric is too strict for this level.
3. **Risk**: At current velocity, Day 45 time-to-fill target is at risk.
Consider parallel-tracking 5 additional sourced candidates now.
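The time-in-stage diagnosis above can be reproduced mechanically. A sketch using the sample pipeline's figures (stage names and targets mirror the sample output; this is not the plugin's actual API):

```python
# Sketch: flag pipeline stages whose average time-in-stage exceeds target,
# and name the worst offender. Figures mirror this lesson's sample pipeline.

stages = [
    # (stage transition, avg days in stage, target days)
    ("Sourced -> Screen",    5, 7),
    ("Screen -> Interview", 12, 7),
    ("Interview -> Debrief", 6, 5),
]

bottlenecks = [(name, days, target)
               for name, days, target in stages if days > target]
# Rank by how far over target each stage runs
worst = max(bottlenecks, key=lambda s: s[1] - s[2])

for name, days, target in bottlenecks:
    print(f"{name}: {days}d (target {target}d)")
print(f"Primary bottleneck: {worst[0]}")  # prints Screen -> Interview
```

Note that this check also flags Interview -> Debrief (6 days against a 5-day target), a marginal overrun the sample table did not highlight; ranking by overshoot is what keeps attention on the primary bottleneck.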

Agent Reports as Operational Intelligence

This is the insight most HR teams miss: the four persistent agents are not just automation systems. They are sensors. Every week and every month, they produce reports that contain operational intelligence about the health of the HR function — if you know how to read them.

The Knowledge Base Agent: Query Intelligence

The KB agent answers employee questions 24/7. Its weekly report is not a log of answers — it is a map of where your HR knowledge infrastructure is working and where it is failing.

| Signal | What It Means | Action |
|--------|---------------|--------|
| Rising query volume on a topic | Employees cannot find an answer — missing or confusing documentation | Review and update the relevant policy or FAQ |
| Repeated exact same question | FAQ entry does not exist or is hard to find | Add or surface the FAQ entry |
| High escalation rate on a topic | KB agent is reaching its knowledge boundary here | Expand knowledge base or redesign the FAQ for this topic |
| Queries about a new policy | Launch created confusion — policy language unclear | Simplify policy language and republish |

Sample KB agent weekly digest (what the CHRO reads):

KNOWLEDGE BASE AGENT — WEEKLY REPORT
Week of: [Date]
Total queries handled: 312
Escalated to HR: 14 (4.5%)

TOP QUERY CATEGORIES THIS WEEK:
1. Flexible working policy: 67 queries (21%) ↑ from 8 last week
→ Signal: New policy launch created confusion. Most common sub-question:
"What counts as a 'core hours' day vs a flexible day?"
→ FAQ gap: Core hours definition not in FAQ. Recommend: add specific Q&A.

2. Holiday booking process: 44 queries (14%) — stable
→ No action needed; standard seasonal pattern

3. Salary review timeline: 38 queries (12%) — new this week
→ Signal: Employees anticipating the upcoming review cycle.
→ Recommendation: Publish FAQ on timeline and process now, before
volume increases further.

ESCALATION PATTERNS:
5 queries escalated re: flexible working individual situations
→ 3 involved requests to work from overseas (policy unclear for this case)
→ Recommend: HR to clarify overseas working policy before next week

AGENT ACCURACY REVIEW:
Self-assessed accuracy: 94% (based on source citations matched to queries)
Flagged for review: 3 responses where policy source may be outdated
→ Policy Maintenance Agent has a cross-reference check scheduled for next week

The Onboarding Orchestrator: Process Health

The orchestrator runs the onboarding workflow automatically (T-14 to Day 90). Its completion reports tell you whether the process is working — and where new joiners are falling behind.

| Signal | What It Means | Action |
|--------|---------------|--------|
| Pre-boarding task completion <90% | New joiners are arriving unprepared on Day 1 | Follow up with specific incomplete tasks; review task design |
| 30-day survey scores trending down | Onboarding experience is deteriorating | Investigate — new manager? changed programme? |
| IT access delays flagged | IT provisioning is a consistent friction point | Fix the process upstream with IT |
| One new joiner consistently behind on milestones | Potential early warning of poor fit or manager support gap | HR check-in call |

The Policy Maintenance Agent: Compliance Currency

The policy maintenance agent monitors statutory rates and policy consistency monthly. Its reports tell you whether your policy infrastructure is current and internally consistent.

| Signal | What It Means | Action |
|--------|---------------|--------|
| Statutory rate change detected | A mandatory rate (minimum wage, sick pay, etc.) has changed | Review update before it goes live |
| Cross-reference inconsistency flagged | Two policies contradict each other | HR review and resolution |
| Policy last-reviewed date >12 months | Policy is overdue for review | Schedule a review with the relevant HR lead |

The Offboarding Knowledge Agent: Departure Quality

The offboarding agent triggers when a resignation record appears in the HRIS and automates the knowledge capture and handover process. Its reports tell you whether departing employees are leaving their knowledge behind.

| Signal | What It Means | Action |
|--------|---------------|--------|
| Knowledge capture completion rate <80% | Employees are leaving without full knowledge transfer | Investigate whether the process is starting early enough |
| High-risk departure flagged | A departing employee holds critical, undocumented knowledge | HR to ensure a structured knowledge interview (see Lesson 10) |
| Handover plan not confirmed | Manager has not signed off on handover | Follow up immediately |

The HR Intelligence Dashboard

The CHRO does not read four separate reports independently. They look for patterns across all four — signals that, together, tell a story about the health of the HR function.

A conceptual HR intelligence dashboard might look like this:

| Metric | Source | Frequency | Action Threshold |
|--------|--------|-----------|------------------|
| Top query categories + volume delta | KB Agent weekly report | Weekly | >50% spike in any category → investigate |
| Escalation rate | KB Agent weekly report | Weekly | >7% → review KB knowledge base |
| Pre-boarding task completion | Onboarding Orchestrator | Per cohort | <90% → follow up with new joiners |
| 30-day survey score | Onboarding Orchestrator | Monthly | Below org baseline → manager conversation |
| Statutory rate changes pending | Policy Maintenance Agent | Monthly | Any → schedule HR review |
| Knowledge capture completion | Offboarding Knowledge Agent | Per departure | <80% → HR intervention |
| Voluntary attrition rate | /people-report | Quarterly | Trend increase >2pp → investigation |
| Time-in-stage (screening) | /recruiting-pipeline | Per active role | >7 days → audit screening process |

The dashboard does not replace judgment — it directs attention. When the KB agent shows a spike in flexible working queries and the policy maintenance agent has just flagged a cross-reference inconsistency in the flexible working policy, those two signals together point to one root cause. That connection is only visible if someone is reading all four reports.
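The threshold column of the dashboard can be turned into a mechanical weekly check. A sketch with hypothetical signal names (the thresholds follow the table above; the dictionary keys are assumptions):

```python
# Sketch: evaluate a subset of the dashboard's weekly action thresholds.
# Signal key names are hypothetical; threshold values follow the table above.

def weekly_alerts(signals: dict) -> list[str]:
    """Return actions triggered by this week's agent-report signals."""
    alerts = []
    if signals.get("kb_category_spike_pct", 0) > 50:
        alerts.append("KB query spike: investigate topic")
    if signals.get("kb_escalation_rate_pct", 0) > 7:
        alerts.append("Escalation rate high: review knowledge base")
    if signals.get("preboarding_completion_pct", 100) < 90:
        alerts.append("Pre-boarding below 90%: follow up with new joiners")
    if signals.get("knowledge_capture_pct", 100) < 80:
        alerts.append("Knowledge capture below 80%: HR intervention")
    return alerts

# Example week: flexible working queries spiked, everything else healthy
print(weekly_alerts({
    "kb_category_spike_pct": 738,   # 8 -> 67 queries week over week
    "kb_escalation_rate_pct": 4.5,
    "preboarding_completion_pct": 94,
    "knowledge_capture_pct": 88,
}))
```

The code only directs attention, exactly as the text says: connecting a KB query spike to a policy cross-reference flag still requires a human reading both reports.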

Exercise: Quarterly Analytics Review and Agent Intelligence Audit

Type: Applied Practice
Time: 30 minutes
Plugin commands: /people-report, /recruiting-pipeline
Goal: Generate a quarterly people report, review a simulated KB agent weekly report, and identify the signals that would require HR action this week

Step 1 — Generate a People Report

Use the following scenario data (or substitute your own organisation's data) to generate a quarterly people report:

/people-report quarterly headcount and attrition

Organisation: [Your company, or use: "EdTech company, 200 employees, one office"]
Reporting period: Last quarter
Key data:
- Start-of-quarter headcount: [number]
- New hires: [number and which teams]
- Voluntary departures: [number, which teams, which levels if known]
- eNPS score: [if available, or omit]
- Any notable events: [e.g., new policy launch, office move, team restructure]

Review the output. Identify:

  • The single most significant attrition finding
  • One recommendation you would act on this quarter

Step 2 — Audit a KB Agent Weekly Report

Review the sample KB agent report in this lesson. Identify:

  1. Two FAQ gaps: Questions appearing in the report that do not have a documented FAQ entry
  2. One escalation pattern: A cluster of escalations pointing to a common root cause
  3. One accuracy concern: A query type where the agent's response may be outdated or incomplete

Write three specific actions HR should take this week based on the report.

Step 3 — Design Your HR Intelligence Dashboard

On paper or in a document, design your HR intelligence dashboard. For each metric you would track:

  • Which agent or tool generates it
  • Whether you would review it weekly or monthly
  • What threshold would trigger an HR action
  • Who is responsible for responding to that trigger

Aim for six to eight metrics that together give you a complete picture of HR function health.

Deliverable: A quarterly people report output, a three-action KB agent response memo, and a six-to-eight metric HR intelligence dashboard design.

Try With AI

Use these prompts in Cowork or your preferred AI assistant.

Reproduce: Generate an attrition report and identify the most significant finding.

Generate a quarterly people analytics report with a focus on attrition.

Organisation: 200-person EdTech company, one office location
Reporting period: Q1 (three months)
Data:
- Start headcount: 190, end headcount: 200
- Voluntary departures: 5 (2 from Engineering at Senior level, 1 from
Sales, 1 from Customer Success, 1 from Operations)
- Involuntary departures: 2
- Average tenure at exit (voluntary): 2.1 years
- New hires: 17 (across all teams)
- eNPS: +18 (from mid-quarter survey, 72% participation)

Produce: executive summary with the three most important findings,
a key metrics table, a detailed analysis of the most significant attrition
pattern, and two specific HR recommendations.

What you are learning: A people report is only useful if it surfaces a priority. This prompt practises reading for signal — the executive summary and detailed analysis should tell you exactly what to investigate and why.

Adapt: Analyse recruiting pipeline friction for a role in your own organisation.

I need to diagnose friction in our recruiting pipeline for [role title].

Pipeline data (from job opening to today, [X] days):
[Stage]: [count] candidates, [X] days average time in stage
[Stage]: [count] candidates, [X] days average time in stage
[add stages as needed]

Our time-to-fill target: [X] days

Identify:
1. Which stage has the highest time-in-stage relative to a healthy target
2. What that delay typically signals about what is broken in the process
3. Two specific actions to improve conversion or reduce time at the
bottleneck stage
4. Whether our current velocity will hit the time-to-fill target

What you are learning: Time-in-stage is the recruiting pipeline's most diagnostic metric. The stage where candidates are stuck longest points to the specific process failure — not just that the process is slow.

Apply: Design a weekly HR signal review using all four agent outputs.

I want to design a weekly HR signal review process that uses the outputs
from four persistent HR agents.

The four agents and what they generate:
1. Knowledge Base Agent: weekly query summary (volume, topics, escalations)
2. Onboarding Orchestrator: cohort completion reports (pre-boarding tasks,
Day 1 readiness, 30-day survey scores)
3. Policy Maintenance Agent: monthly policy currency and consistency audit
4. Offboarding Knowledge Agent: per-departure knowledge capture completion

Design a weekly review process for an HR team of three:
- Which metrics to review weekly vs monthly
- How to identify cross-agent signals (e.g., KB spike + policy change)
- What action threshold triggers an immediate response vs a quarterly review
- How to document findings and track whether actions were taken

Output as a structured protocol the team could run in 30 minutes each Monday.

What you are learning: The four agents together form an intelligence system, not four separate tools. Designing a cross-agent review process forces you to think about what patterns only become visible when you read all four outputs together.
