Start Here

Before you learn any controls, you need to feel the problem they solve.

The single biggest failure mode in AI runtime security isn't a missing control; it's a missing mental model. People deploy guardrails without understanding what they're guarding against. They implement monitoring without knowing what separates real signal from noise.

This training fixes that by starting where the real problems do: in production, with agents acting on other agents' outputs, and no one verifying that the reasoning chain is sound.


Your learning path

(Diagram of the learning path: the shared scenario splits into three role tracks, then converges.)


Begin

Everyone begins with the same scenario. It takes about 15 minutes and sets up everything that follows.

Enter the scenario →


How to get the most from this training

  • New to AI risks? Start with the optional Foundations track for a primer on how LLMs work and why AI systems fail differently from traditional software. You can skip it if you're already comfortable with the basics.
  • Don't skip the scenario. The tracks assume you've internalised it.
  • Pick one track that matches your role. You can explore the others later.
  • Do the decision exercises. Reading is not the same as deciding. The exercises are where the learning actually happens.
  • Do the convergence exercise with colleagues if you can. AI runtime security is a cross-functional problem, and the exercise is designed to surface that.