# Foundations Track

## By the end of this track you will be able to
- Explain why AI systems require fundamentally different security approaches from traditional software
- Describe how LLMs generate outputs and where that process introduces risk
- Identify the core failure modes that affect AI systems in production, including hallucination, prompt injection, and drift
- Articulate why traditional security controls consistently miss AI-specific failures
- Define what runtime security means and why it matters for AI deployments
- Evaluate whether your organisation's current security posture accounts for AI-specific risks
## Your thread
This track gives you the mental model you need before diving into the scenario and role-based tracks. It covers the basics: why AI systems behave differently from traditional software, how LLMs work (enough to understand the risks), where things go wrong, and what runtime security actually means. If you already have a solid grasp of LLM mechanics and AI failure modes, you can skip ahead to the role-based tracks for Security Architects, Risk & Governance, or Engineering Leads.
If you're newer to this space, or you want to make sure you're not carrying assumptions over from traditional security, start here. The modules are short, and they'll save you time later.
| Module | Focus |
|---|---|
| 1. Why AI Systems Are Different | What makes AI systems fundamentally unlike the software you already know how to secure |
| 2. How LLMs Actually Work | A practical explanation of how LLMs generate outputs, without the academic overhead |
| 3. Where Things Go Wrong | Core failure modes: hallucination, injection, drift, and the risks that show up in production |
| 4. Why Traditional Security Falls Short | The specific gaps in conventional security controls when applied to AI systems |
| 5. The Case for Runtime Security | What runtime security is, why it matters, and how it addresses the gaps |
| Self-Assessment | Check your understanding before moving on to the role-based tracks |