Risk & Governance Track

By the end of this track you will be able to:

  • Explain why multi-agent AI systems create governance gaps that existing frameworks do not cover
  • Map the Phantom Compliance scenario to board-level risk categories and regulatory obligations
  • Articulate epistemic integrity as a governance obligation, not just a technical property
  • Identify which MASO control domains carry governance and accountability implications
  • Define the evidence standard required to demonstrate operational assurance to a regulator
  • Make defensible governance decisions about agent chain deployments under ambiguity

Your thread

As a risk or governance professional, your job is not to design the controls; it is to ensure that controls exist, that they are adequate, and that the organisation can demonstrate their adequacy to regulators, auditors, and the board.

The challenge with multi-agent AI systems is that the governance frameworks you rely on were designed for a world where a human or a single model produces an output. When agents delegate to agents, and each agent acts on data assembled by another agent, the accountability chain fragments in ways that existing frameworks do not anticipate.

This track gives you the regulatory and accountability story. It follows the same scenario (Phantom Compliance) but frames every failure through the lens of governance, oversight, and evidence.


The golden thread

The thread runs from the threat model (why existing AI governance does not cover agent chains) to MASO as the extension layer (what oversight obligations change when agents delegate to agents).

Every module in this track follows a five-beat structure:

  1. What goes wrong (scenario-driven)
  2. Why current governance does not catch it (the gap)
  3. What epistemic integrity means in this context (the concept)
  4. Which MASO controls address it (the framework)
  5. How to verify it is working (the evidence)

Modules

  1. What Goes Wrong (~20 min)
     The Phantom Compliance scenario through a governance lens: accountability gaps and board-level risk

  2. Why Governance Misses It (~25 min)
     Walking through NIST AI RMF, ISO 42001, and the EU AI Act, identifying where each framework falls short on agent chains

  3. Epistemic Integrity (~20 min)
     Epistemic integrity as a governance obligation: fiduciary duty, regulatory compliance, and accountability

  4. MASO Controls (~25 min)
     Governance-relevant MASO domains and how they extend existing frameworks

  5. Oversight & Evidence (~25 min)
     What evidence you need for regulators, the board, and auditors, and the difference between compliance theatre and operational assurance

  Decision Exercise (~15 min)
     A governance decision under ambiguity: regulatory pressure, business pressure, and incomplete information

Total track time: Approximately 2 hours and 10 minutes.


Who this track is for

This track is designed for:

  • Chief Risk Officers and risk teams responsible for AI risk within existing enterprise risk management frameworks
  • Compliance officers mapping AI systems to regulatory obligations
  • Internal auditors evaluating AI governance controls
  • Board members and governance committees who need to understand what questions to ask about AI agent deployments
  • Legal and regulatory affairs professionals interpreting AI-specific regulation for multi-agent systems

You do not need a technical background to complete this track. Where technical concepts appear, they are explained in governance terms.


Prerequisites

Before starting this track, complete the Phantom Compliance scenario. It takes approximately 15 minutes and provides the shared context that all tracks build on.


How this track connects

After completing this track, you will move to the Convergence Exercise where risk, security, and engineering perspectives come together around a shared decision. The convergence exercise requires at least two tracks to be completed and is most valuable when you bring the governance lens to a table that includes technical perspectives.


Start Module 1 →