Choose Your Track

Each track follows the same five-beat structure but frames the material for your role. All tracks reference the same AIRS framework and MASO controls; the difference is what you need to do with them.

Prerequisite

All tracks assume you've completed The Scenario. If you haven't, start there.

New to AI security?

If you want to understand the fundamentals first (how LLMs work, why AI fails differently, why traditional security falls short), start with the Foundations track below. It's optional but recommended if you're new to AI-specific risks.


Foundations

5 modules + self-assessment · ~30 minutes

You'll learn: Why AI systems behave differently from traditional software, how LLMs generate outputs (and where that introduces risk), the core failure modes, and why existing security controls don't cover them.

The thread: How AI differs → LLM mechanics → failure modes → security gaps → the case for runtime security.

Begin →

Security Architects

5 modules + decision exercise · ~45 minutes

You'll learn: How the MASO control domains map to the AIRS three-layer architecture, where to place controls in multi-agent pipelines, and how to design verification that catches reasoning failures, not just output failures.

The thread: Threat model → MASO control domains → three-layer architecture → implementation patterns → verification evidence.

Begin →

Risk & Governance

5 modules + decision exercise · ~45 minutes

You'll learn: Why existing AI governance frameworks don't cover agent-to-agent delegation, how MASO extends governance to runtime behaviour, and what oversight obligations change when agents delegate to agents.

The thread: Threat model → governance gap → accountability in agent chains → MASO as extension layer → oversight evidence.

Begin →

Engineering Leads

5 modules + decision exercise · ~45 minutes

You'll learn: What actually breaks in production agent chains, which controls are runtime vs. design-time, what instrumentation to build, and how to make failures visible before they cause damage.

The thread: Threat model → production failure modes → runtime controls → instrumentation patterns → operational evidence.

Begin →


After your track

The three role tracks (Security Architects, Risk & Governance, Engineering Leads) converge in a cross-functional exercise where security architects, risk professionals, and engineers collaborate on an agent chain risk assessment. This mirrors how AI runtime security works in practice: it's never a single team's problem.