Domain 5 -- Governance, Risk and Compliance Advisory
"The question is not whether AI can test controls faster than a human. It can. The question is whether the controls being tested are the right controls -- and that is an advisory judgment no agent can make alone."
In Lesson 5, you examined how AI transforms management accounting by shifting professionals from model maintenance to business partnering. Now you move to the final domain in our five-domain survey: the broadest, most heterogeneous, and -- for advisory professionals -- the most resilient.
Governance, Risk and Compliance (GRC) advisory encompasses governance structures, risk management frameworks, internal control design, and regulatory compliance management. It is the domain where the CA/CPA profession's advisory judgment is most directly the product being sold. A governance advisory engagement does not apply a standard process to standard inputs -- it assesses what a specific organisation needs, given its industry, regulatory environment, risk appetite, and board expectations. This makes GRC the domain least directly threatened by automation in the short term.
But "least threatened" does not mean "unchanged." The monitoring and testing components of GRC work are highly automatable, and the AI platforms that are transforming continuous monitoring are already in production. The professional shift is not from working to not working -- it is from testing controls periodically to overseeing AI agents that monitor controls continuously. That is a fundamentally different role requiring a fundamentally different set of skills.
What This Domain Covers
GRC advisory is the broadest of the five CA/CPA domains, spanning four distinct sub-disciplines:
| Sub-Discipline | What It Produces | AI Impact |
|---|---|---|
| Governance Advisory | Board governance frameworks, audit committee guidance, governance best practice recommendations | Low -- highly contextual, organisation-specific advisory |
| Risk Management | Enterprise risk frameworks, risk registers, risk appetite statements | Moderate -- risk identification and register maintenance automatable; framework design and appetite calibration require judgment |
| Internal Controls | Control design, control testing, control remediation | Moderate-High -- testing and monitoring highly automatable; control design requires understanding of business processes |
| Compliance Management | Regulatory compliance monitoring, breach identification, remediation workflows | Moderate-High -- monitoring and reporting highly automatable; interpreting regulatory requirements in context requires judgment |
The Three Lines Model (updated by the Institute of Internal Auditors in 2020 from "Three Lines of Defence") is the standard governance framework for organising risk and control responsibilities in an organisation.
First Line -- Management. The business units and operational functions that own and manage risk on a day-to-day basis. They implement controls and are responsible for identifying and managing risks within their own operations.
Second Line -- Oversight Functions. The risk management and compliance functions that set policy, define the risk appetite, and monitor whether the first line is managing risk effectively. They provide oversight and challenge, but do not own operational risks.
Third Line -- Independent Assurance. Internal audit, which provides independent assurance to the board and audit committee that the first and second lines are functioning effectively.
Where AI transforms each line:
- First Line: AI embedded in operational systems (ERP agents, process automation) transforms how business units execute controls
- Second Line: AI monitoring agents continuously check whether first-line controls are operating effectively -- this is the most immediately transformative capability
- Third Line: AI-driven transaction testing and anomaly detection transforms internal audit from periodic sampling to continuous assurance
The advisory CA/CPA role sits primarily in the second and third lines -- designing the frameworks, interpreting the findings, and advising management and the board.
Gen-AI Capabilities Available Now
Three GRC workflows are already well-served by Gen-AI tools.
Policy drafting. Governance and compliance policy documents -- risk appetite statements, internal control frameworks, compliance procedures, board governance policies -- follow structured templates and require the application of best practice guidance to the organisation's specific context. Gen-AI tools can draft these documents from templates, applying the relevant regulatory requirements and governance codes to the organisation's circumstances. The GRC professional reviews, ensures the policy reflects the organisation's actual risk appetite and operating context, and refines the language for the intended audience (board, management, operational staff).
Risk assessment. Enterprise risk assessment processes -- identifying risks, assessing their likelihood and impact, and producing a structured risk register -- are information synthesis tasks well suited to Gen-AI. The AI can gather information about the organisation's business and its regulatory environment, apply standard risk frameworks, and produce a structured risk register for management review. The professional validates the risk identification (has the AI missed industry-specific risks?), calibrates the assessments (are the likelihood and impact ratings appropriate?), and connects the register to the organisation's risk appetite framework.
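The likelihood-and-impact scoring step described above can be sketched as a small script. This is a minimal illustration, not any vendor's implementation: the risks, the 1-5 rating scale, and the multiplicative scoring are hypothetical examples of the kind of register an AI might draft and a professional would then calibrate.

```python
# Illustrative risk register entries; the risks and ratings are hypothetical
# examples of AI-drafted output awaiting professional calibration.
register = [
    {"risk": "supplier concentration", "likelihood": 3, "impact": 4},
    {"risk": "regulatory breach", "likelihood": 2, "impact": 5},
    {"risk": "FX exposure", "likelihood": 4, "impact": 2},
]

def score(entry: dict) -> int:
    """A common convention: likelihood x impact, each rated 1-5."""
    return entry["likelihood"] * entry["impact"]

# Rank for management review; the professional still validates the
# identification and calibrates the ratings before the register is used.
ranked = sorted(register, key=score, reverse=True)
```

The ranking is mechanical; the judgment lies in whether the likelihood and impact ratings are right, which is exactly the validation step the professional retains.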
Compliance reporting. Preparing compliance reports for regulators, boards, and audit committees -- summarising the compliance position, identifying breaches, and documenting remediation actions -- is documentation-intensive work amenable to AI assistance. The professional ensures the report accurately represents the compliance position and that the remediation actions are appropriate and achievable.
Agentic AI Capabilities Approaching Production
Two agentic capabilities represent the most significant near-term transformation in GRC.
Continuous controls monitoring agent. This agent monitors transactions, process executions, and system events in real time, applying the control framework to identify exceptions, anomalies, and potential control failures. Rather than testing controls periodically (as internal audit traditionally does), it monitors continuously and generates alerts when controls appear to have failed. This is the most transformative agentic capability in the GRC domain because it changes the fundamental operating model from periodic assurance to continuous assurance.
Autonomous compliance agent. This agent tracks regulatory obligations, monitors the organisation's compliance position against each obligation, identifies gaps and potential breaches, and triggers remediation workflows -- all autonomously. The compliance professional shifts from manually tracking obligations to overseeing an agent that tracks them continuously.
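The track-identify-trigger loop can be sketched as follows. The obligation names, status values, and ticket structure are illustrative assumptions; a production agent would read obligations from a regulatory register and call a real workflow system.

```python
# Hypothetical obligation register mapping each obligation to its
# current compliance status; the entries are illustrative only.
obligations = {
    "quarterly-regulatory-return": "filed",
    "annual-board-risk-review": "overdue",
    "aml-transaction-screening": "active",
}

REMEDIATION_STATES = {"overdue", "breached"}

def find_gaps(register: dict[str, str]) -> list[str]:
    """Identify obligations that need a remediation workflow."""
    return [name for name, status in register.items() if status in REMEDIATION_STATES]

def trigger_remediation(gaps: list[str]) -> list[dict]:
    """Open one remediation ticket per gap (a dict here; in practice
    a call into the organisation's workflow system)."""
    return [{"obligation": g, "action": "remediate", "assignee": "compliance-officer"}
            for g in gaps]
```

The professional's oversight role sits around this loop: confirming the obligation register is complete and deciding whether the triggered remediation is the right response.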
Real-World Deployments
Two platforms illustrate the current state of AI in GRC.
ServiceNow AI Agents (servicenow.com/products/governance-risk-and-compliance.html) represent one of the closest current implementations of agentic GRC. ServiceNow AI agents autonomously monitor transactions, identify incidents, open cases, and initiate investigation workflows. They continuously evaluate controls against policy baselines and live operational signals, automatically create and route issues when deviations are detected, and trigger remediation playbooks. At Knowledge 2025, ServiceNow launched AI Control Tower -- a centralised command centre for governing AI agents across the enterprise.
IBM watsonx.governance (ibm.com/products/watsonx-governance) enables automated monitoring, compliance analysis, and governance workflows. IBM was named a Leader in the 2025 IDC MarketScape for Unified AI Governance Platforms. The platform monitors AI models for fairness, bias, and drift, with compliance accelerators covering the EU AI Act, ISO 42001, and NIST AI RMF. In early 2026, watsonx.governance is introducing governance of AI agents themselves -- monitoring agent decisions, behaviours, and performance in production and triggering alerts when thresholds are breached.
- ServiceNow AI Agents for GRC: servicenow.com/products/governance-risk-and-compliance.html -- Continuous controls monitoring in production
- IBM watsonx.governance: ibm.com/products/watsonx-governance -- AI governance and compliance monitoring
GRC: The Advisory Resilience Domain
GRC advisory roles focused on manual compliance testing and periodic control assessments face the most significant change. The shift is from testing controls periodically to overseeing AI agents that monitor controls continuously -- a fundamentally different role.
But the opportunity is the advisory layer above continuous monitoring:
| From (Periodic) | To (Continuous) |
|---|---|
| Testing a sample of transactions quarterly | Overseeing an agent that monitors all transactions continuously |
| Producing a compliance report annually | Reviewing real-time compliance dashboards and investigating alerts |
| Designing controls on paper and testing them later | Designing controls with monitoring specifications built in from day one |
| Advising the board once per year on risk position | Advising the board continuously as the risk position changes |
The professional skills that become more valuable in this model are: interpreting what the monitoring data means, designing the monitoring programme that makes the agent effective, advising management and the board on the implications of what the agents are finding, and responding when agents identify significant issues.
COSO Framework (US origin, global adoption): The Committee of Sponsoring Organizations of the Treadway Commission's 2013 Internal Control -- Integrated Framework provides the five-component model (Control Environment, Risk Assessment, Control Activities, Information and Communication, Monitoring Activities) used worldwide. AI continuous monitoring most directly transforms the Monitoring Activities component, but the Control Environment (tone at the top, governance culture) remains a human governance responsibility.
UK Corporate Governance Code: The UK Code (issued by the Financial Reporting Council) requires listed companies to maintain sound risk management and internal control systems, with annual board review. The shift to continuous AI monitoring changes how boards discharge this responsibility -- from reviewing periodic reports to overseeing continuous monitoring programmes.
King IV (South Africa): The King IV Code on Corporate Governance emphasises integrated thinking and stakeholder inclusivity. Its technology governance principles (Principle 12: governing technology and information) are directly relevant to AI agent governance -- including governing the AI agents that govern the organisation's risk and compliance.
Pakistan (SECP Code of Corporate Governance): The SECP Code requires listed companies to establish audit committees, internal audit functions, and risk management frameworks. The Pakistan Institute of Corporate Governance (PICG) provides governance training and advisory. AI monitoring tools must be deployed within the SECP's governance requirements.
Practice Exercise 5: Continuous Controls Monitoring Specification (25 min)
What you will build: A SKILL.md specification for a continuous controls monitoring agent covering three financial controls.
Requirements: Claude (any interface). Knowledge of any organisation's key financial controls. If you need a ready-made entity, download the exercise data zip and open exercises/entity-profiles/crescent-textiles.md -- it includes key risk areas and regulatory obligations.
1. Choose three financial controls for a specific organisation type (e.g., a bank, a retail company, a textile manufacturer). Ask Claude: "For each control, specify: (a) what the control is designed to prevent, (b) what data would evidence that the control has been executed, (c) what anomaly would indicate the control may have failed, and (d) what the monitoring agent should do when it detects that anomaly."
2. Ask: "Design a continuous monitoring programme for these three controls. What is the monitoring frequency? What are the escalation thresholds? What actions should the agent take autonomously, and what should it escalate to a human?"
3. Ask: "Write this monitoring programme as a SKILL.md specification for an autonomous GRC monitoring agent. Include: the control objectives, the monitoring rules, the anomaly detection thresholds, and the escalation routing."
4. Ask: "In the Three Lines Model, where does this monitoring agent sit -- first, second, or third line? What are the implications for the human roles in each line if this monitoring becomes continuous?"
Check your work: You should have (a) three controls with complete specifications (prevention objective, evidence data, failure anomaly, agent response), (b) a monitoring programme with frequencies and escalation thresholds, (c) a SKILL.md specification encoding the programme, and (d) a governance analysis placing the agent in the Three Lines Model.
The key learning: The discipline being built is control design thinking -- specifying precisely what a control is supposed to prevent, what evidence would show it has worked, and what anomaly would reveal it has failed. This is the skill the GRC professional must develop as AI takes over the testing: designing the monitoring programme that makes the agent effective.
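The four-part specification the exercise asks for can also be encoded as structured data that an agent could consume. The sketch below is one illustrative encoding: the control, thresholds, and routing are hypothetical examples, not prescriptions, and the field names are assumptions rather than any SKILL.md standard.

```python
# Illustrative encoding of one control specification from the exercise;
# the control, thresholds, and routing are hypothetical examples.
control_spec = {
    "control_id": "C1-supplier-payments",
    "prevention_objective": "Prevent unauthorised supplier payments",
    "evidence_data": ["payment record", "two distinct approver IDs", "approval timestamps"],
    "failure_anomaly": "payment posted above threshold with fewer than two approvers",
    "agent_response": {
        "autonomous": "hold the payment batch and open an exception case",
        "escalate_to": "finance controller",
    },
    "monitoring": {"frequency": "real-time", "escalation_threshold": "any occurrence"},
}

def validate_spec(spec: dict) -> bool:
    """Check the spec contains all four parts the exercise requires."""
    required = {"prevention_objective", "evidence_data",
                "failure_anomaly", "agent_response"}
    return required.issubset(spec)
```

Writing the specification at this level of precision is the point of the exercise: each field forces the control design thinking described above into a form an agent can act on.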
Try With AI
Use these prompts in Cowork or your preferred AI assistant to explore this lesson's concepts.
Prompt 1: Control Design Thinking
I work in [YOUR INDUSTRY -- e.g., banking, manufacturing, retail,
professional services] in Pakistan.
Choose three key financial controls relevant to my industry. For each
control, specify:
1. Prevention objective -- what risk does this control mitigate?
2. Evidence -- what data proves the control was executed?
3. Failure signal -- what anomaly suggests the control has failed?
4. Agent response -- what should a monitoring agent do when it
detects the failure signal?
Then explain: if this monitoring runs continuously (not quarterly),
how does the role of the internal audit team change?
Present your answer as a table with one row per control.
What you are learning: Control design thinking is the core professional skill in AI-augmented GRC. By specifying controls with the precision required for an AI agent to monitor them, you are learning to think like a monitoring programme designer -- not just a control tester. The question about internal audit role change connects the technical specification to the professional transformation.
Prompt 2: GRC Agent Governance
An organisation deploys a continuous controls monitoring agent that
monitors all financial transactions in real time. The agent:
- Flags transactions exceeding PKR 5 million without dual approval
- Detects patterns suggesting potential fraud (unusual timing,
round-number transactions, new payees above threshold)
- Generates weekly compliance summary reports
- Escalates anomalies scoring above 0.8 risk threshold to the
compliance officer
Using the Three Lines Model:
1. Which line does this agent operate in? Justify your answer.
2. What human roles are needed in each line to make this agent
effective?
3. What happens if the agent generates too many false positives?
Who decides to adjust the thresholds?
4. Who governs the agent itself -- ensuring it is monitoring the
right controls with appropriate thresholds?
Frame your answer for a board audit committee presentation.
What you are learning: Deploying an AI monitoring agent is not just a technology decision -- it is a governance decision. By placing the agent in the Three Lines Model and identifying the human roles around it, you are learning that continuous monitoring creates new governance responsibilities (who governs the AI that governs the controls?) rather than eliminating governance work.
Prompt 3: Advisory Resilience Assessment
I am a [YOUR GRC ROLE -- e.g., internal auditor, compliance officer,
risk manager, governance advisor] at a [COMPANY TYPE] in [COUNTRY].
Map my current work across these categories:
1. Manual testing and data gathering (mechanical)
2. Report writing and documentation (semi-mechanical)
3. Analysis, interpretation, and investigation (judgment-intensive)
4. Advisory -- advising management/board on implications (high judgment)
For each category:
- Estimate my current time percentage
- Estimate the change with continuous AI monitoring deployed
- Identify which tasks disappear, which transform, and which become
more important
Then answer: what new skills do I need to develop to thrive in the
continuous monitoring model? Be specific -- not "learn about AI" but
concrete professional skills like "monitoring programme design" or
"threshold calibration."
What you are learning: GRC is described as the domain where advisory judgment is most resilient -- but resilience is not automatic. By mapping your specific role against the automation spectrum, you identify which parts of your current work face displacement (manual testing, routine reporting) and which new skills you need to develop (monitoring programme design, agent governance, threshold calibration) to remain valuable in the continuous monitoring model.