EU AI Act Crosswalk¶
This document maps the AI Runtime Behaviour Security controls to EU AI Act requirements.
Overview¶
The EU AI Act establishes requirements for AI systems based on risk classification. This crosswalk focuses on high-risk AI systems (Annex III), which face the most stringent requirements.
Key principle: The framework's control model—guardrails prevent, Judge detects, humans decide—aligns well with the EU AI Act's emphasis on human oversight and risk management.
EU AI Act Risk Categories¶
| Category | Description | Framework Alignment |
|---|---|---|
| Unacceptable Risk | Prohibited AI practices | Out of scope (don't build these) |
| High Risk | Annex III systems (credit, employment, etc.) | CRITICAL tier |
| Limited Risk | Transparency obligations | HIGH/MEDIUM tier |
| Minimal Risk | No specific requirements | LOW tier |
High-Risk AI System Requirements¶
Article 9: Risk Management System¶
Requirement: Establish, implement, document, and maintain a risk management system.
| EU AI Act Requirement | Framework Control | Implementation |
|---|---|---|
| Identify and analyse known/foreseeable risks | AI.2.1 Risk Classification | Risk assessment before deployment |
| Estimate and evaluate risks | AI.2.2 Risk Assessment | Document risk factors and scoring |
| Evaluate risks from intended use | AI.2.2 Risk Assessment | Use case analysis in classification |
| Adopt risk management measures | AI.7, AI.8, AI.9 | Guardrails, Judge, HITL |
| Test risk management measures | AI.4.2 Testing | Pre-deployment and ongoing testing |
| Ongoing risk management | AI.2.3 Ongoing Risk Monitoring | Judge monitoring, drift detection |
How the framework satisfies this:
- Risk identification: Risk classification matrix covers key factors
- Risk mitigation: Three-layer control model (guardrails, Judge, HITL)
- Testing: Golden set testing, adversarial testing, bias testing
- Ongoing management: Async Judge monitoring, HITL feedback loops
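The risk-classification step above can be sketched in code. This is a minimal illustration assuming six scoring dimensions rated 1-5; the dimension names and tier thresholds below are hypothetical examples, not the framework's actual scoring scheme:

```python
# Illustrative sketch of AI.2.1 risk tier scoring. Dimension names and
# thresholds are hypothetical, not the framework's published scheme.

DIMENSIONS = (
    "consequence_severity", "autonomy_level", "data_sensitivity",
    "user_population", "reversibility", "regulatory_exposure",
)

def classify_risk_tier(scores: dict[str, int]) -> str:
    """Map per-dimension scores (1-5) to a control tier."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    total = sum(scores[d] for d in DIMENSIONS)
    # A maximum-severity score forces the top tier regardless of total,
    # mirroring how Annex III systems land in CRITICAL.
    if total >= 24 or scores["consequence_severity"] == 5:
        return "CRITICAL"
    if total >= 18:
        return "HIGH"
    if total >= 12:
        return "MEDIUM"
    return "LOW"

tier = classify_risk_tier({
    "consequence_severity": 5, "autonomy_level": 3,
    "data_sensitivity": 4, "user_population": 4,
    "reversibility": 2, "regulatory_exposure": 5,
})
print(tier)  # CRITICAL
```

The signed-off scoring record (see the Audit Evidence Matrix) would capture these inputs alongside the governance approval.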
Article 10: Data and Data Governance¶
Requirement: High-risk AI systems using training data shall be developed on the basis of training, validation, and testing datasets that meet quality criteria.
| EU AI Act Requirement | Framework Control | Implementation |
|---|---|---|
| Relevant, representative, free of errors | AI.5.2 Data Quality | Training data validation |
| Appropriate statistical properties | AI.5.2 Data Quality | Data quality metrics |
| Account for specific settings | AI.5.2 Data Quality | Context-appropriate data |
| Examine for biases | AI.6.2 Model Validation | Bias testing |
How the framework satisfies this:
- Data governance: AI.5 Data Governance control family
- Quality assurance: Judge can evaluate for quality issues
- Bias detection: Bias monitoring as part of Judge evaluation (CRITICAL tier)
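Bias testing can start with something as simple as comparing favourable-outcome rates across groups. A minimal sketch using the four-fifths (80%) rule as an illustrative threshold; the framework does not prescribe a specific metric, and the function names are hypothetical:

```python
# Sketch of a bias check that could feed AI.6.2 model validation.
# The four-fifths rule is one common heuristic, used here for
# illustration only.

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Favourable-outcome rate per group, from (group, approved) pairs."""
    totals: dict[str, list[int]] = {}
    for group, approved in outcomes:
        totals.setdefault(group, [0, 0])
        totals[group][0] += int(approved)
        totals[group][1] += 1
    return {g: hits / n for g, (hits, n) in totals.items()}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group rate divided by the highest; below 0.8 flags review."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates([
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
])
print(round(disparate_impact_ratio(rates), 2))  # 0.33 -> flag for HITL review
```

In a CRITICAL-tier deployment, a ratio below the threshold would surface as a Judge finding for human review rather than triggering any automated action.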
Article 11: Technical Documentation¶
Requirement: Technical documentation shall be drawn up before placing on the market and kept up to date.
| EU AI Act Requirement | Framework Control | Implementation |
|---|---|---|
| General description | AI.3.2 System Documentation | System documentation package |
| Detailed description of elements | AI.3.2 System Documentation | Architecture, data flows |
| Monitoring, functioning, control | AI.11 Logging & Monitoring | Monitoring documentation |
| Risk management system description | AI.2 Risk Management | Risk assessment records |
| Description of changes | AI.3.2 System Documentation | Change documentation |
How the framework satisfies this:
- Documentation requirements scale by tier: CRITICAL requires full SR 11-7-style documentation
- Inventory and documentation: AI.3 control family
- Change management: Part of AI.4 Development Security
Article 12: Record-Keeping¶
Requirement: High-risk AI systems shall technically allow for automatic recording of events (logs).
| EU AI Act Requirement | Framework Control | Implementation |
|---|---|---|
| Recording period of operation | AI.11.1 Comprehensive Logging | Full interaction logging |
| Reference database against which checked | AI.11.1 Comprehensive Logging | Context and source logging |
| Traceability of functioning | AI.11.1 Comprehensive Logging | Audit trail |
How the framework satisfies this:
- Comprehensive logging: CRITICAL tier requires full content logging with 7-year retention
- Traceability: Correlation IDs, timestamps, full context
- Judge evaluation logs: Additional assurance documentation
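The logging pattern above (correlation IDs, timestamps, full context) might look like this in practice. A sketch assuming JSON-structured records; the field names are illustrative, not the framework's LOG-01 field list:

```python
# Sketch of the Art. 12 logging shape: one structured record per
# pipeline stage, tied together by a correlation ID so a transaction
# can be traced end-to-end. Field names are illustrative.

import json
import uuid
from datetime import datetime, timezone

def log_event(correlation_id: str, stage: str, detail: dict) -> str:
    record = {
        "correlation_id": correlation_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "stage": stage,  # input | guardrail | model | judge | output
        **detail,
    }
    return json.dumps(record)  # in practice: ship to an append-only store

cid = str(uuid.uuid4())
trail = [
    log_event(cid, "input", {"prompt_hash": "…"}),
    log_event(cid, "guardrail", {"result": "pass"}),
    log_event(cid, "judge", {"finding": "none", "score": 0.97}),
]
# Every record shares the correlation ID, giving Art. 12 traceability.
assert all(json.loads(r)["correlation_id"] == cid for r in trail)
```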
Article 13: Transparency and Provision of Information to Deployers¶
Requirement: High-risk AI systems shall be designed to ensure their operation is sufficiently transparent.
| EU AI Act Requirement | Framework Control | Implementation |
|---|---|---|
| Understand output and use appropriately | AI.9.1 Human-in-the-Loop | HITL ensures understanding |
| Characteristics, capabilities, limitations | AI.3.2 System Documentation | Documentation package |
| Intended purpose | AI.3.1 AI System Inventory | Inventory records |
| Level of accuracy, robustness, cybersecurity | AI.6.2 Model Validation | Validation reports |
How the framework satisfies this:
- Transparency to users: System prompts, UI design, documentation
- Transparency to oversight: Judge reasoning, HITL context
Article 14: Human Oversight¶
Requirement: High-risk AI systems shall be designed to allow for effective human oversight during use.
| EU AI Act Requirement | Framework Control | Implementation |
|---|---|---|
| Properly understand capabilities and limitations | AI.9.1 HITL | Training, documentation |
| Remain aware of automation bias | AI.9.1 HITL | Reviewer independence |
| Correctly interpret output | AI.9.1 HITL | Context in review interface |
| Decide not to use in any situation | AI.9.3 Human Override | Override capability |
| Intervene or interrupt | AI.9.3 Human Override | Stop capability |
How the framework satisfies this:
The framework's control model is designed specifically to satisfy this article:
- Guardrails prevent — Real-time protection, humans can override/disable
- Judge detects — Surfaces issues for human review, does NOT make decisions
- Humans decide — HITL reviews findings, makes all consequential decisions
Critical alignment:
- The Judge is explicitly NOT a decision-maker
- Humans remain accountable for all outcomes
- Override capability is mandatory
- CRITICAL tier requires a human decision on all consequential actions
This avoids GDPR Article 22 concerns about solely automated decision-making by ensuring meaningful human involvement.
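The division of labour above can be made concrete. This is a hedged sketch of a HITL decision record in which the Judge's finding is input to, never a substitute for, a named human's decision; the class and field names are hypothetical, not a prescribed API:

```python
# Sketch of the Art. 14 / GDPR Art. 22 division of labour: the Judge
# surfaces findings, and every consequential outcome is recorded
# against a named human reviewer. Names are illustrative.

from dataclasses import dataclass

@dataclass
class JudgeFinding:
    correlation_id: str
    concern: str       # e.g. "possible_bias", "policy_violation"
    confidence: float  # surfaced for the reviewer, never auto-acted on

@dataclass
class HumanDecision:
    finding: JudgeFinding
    reviewer: str      # accountability: a named human, not "system"
    decision: str      # "approve" | "reject" | "escalate"
    rationale: str

def decide(finding: JudgeFinding, reviewer: str,
           decision: str, rationale: str) -> HumanDecision:
    """The only path to a consequential outcome goes through a human."""
    if decision not in {"approve", "reject", "escalate"}:
        raise ValueError("unknown decision")
    if not rationale.strip():
        raise ValueError("rationale required - no rubber-stamping")
    return HumanDecision(finding, reviewer, decision, rationale)

d = decide(JudgeFinding("txn-42", "possible_bias", 0.91),
           reviewer="j.smith", decision="escalate",
           rationale="Judge flagged disparity; needs second review")
print(d.decision)  # escalate
```

Requiring a non-empty rationale is one way to evidence "meaningful human involvement" rather than rubber-stamping.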
Article 15: Accuracy, Robustness, and Cybersecurity¶
Requirement: High-risk AI systems shall be designed to achieve appropriate levels of accuracy, robustness, and cybersecurity.
| EU AI Act Requirement | Framework Control | Implementation |
|---|---|---|
| Appropriate levels of accuracy | AI.6.2 Model Validation | Accuracy metrics, validation |
| Resilient against errors, faults, inconsistencies | AI.6.3 Model Monitoring | Drift detection, anomaly monitoring |
| Resilient against attempts to alter use/performance | AI.7 Guardrails | Input validation, injection prevention |
| Appropriate cybersecurity measures | AI.4, AI.6.1 | Secure development, model protection |
How the framework satisfies this:
- Accuracy: Validation, Judge quality monitoring
- Robustness: Guardrails protect against malformed input
- Security: Full control framework including injection prevention
Control Mapping Summary¶
| EU AI Act Article | Primary Framework Controls |
|---|---|
| Art. 9 Risk Management | AI.2 Risk Management |
| Art. 10 Data Governance | AI.5 Data Governance |
| Art. 11 Documentation | AI.3 Inventory & Documentation |
| Art. 12 Record-Keeping | AI.11 Logging & Monitoring |
| Art. 13 Transparency | AI.3, AI.9 |
| Art. 14 Human Oversight | AI.9 Human Oversight |
| Art. 15 Accuracy/Security | AI.6, AI.7 |
Key Alignment Points¶
Human Oversight (Article 14)¶
The framework's three-layer model directly supports Article 14: guardrails give operators a controllable prevention layer, the Judge surfaces issues for review without deciding, and HITL reviewers make every consequential decision.
Risk Management (Article 9)¶
The framework provides comprehensive risk management: classification before deployment (AI.2.1), documented risk assessment (AI.2.2), and ongoing monitoring through the Judge and HITL feedback loops (AI.2.3).
Evidence Package for Regulators¶
When demonstrating EU AI Act compliance, provide the evidence below. The Audit Evidence Matrix consolidates all evidence requirements into a single audit-ready view. The per-article tables below provide additional detail.
Audit Evidence Matrix¶
This is the table your auditor needs. One row per control obligation, with the specific artefact, who produces it, and how to verify it.
| EU AI Act Ref | Control Layer | Risk Type | Framework Control | Evidence Artefact | Produced By | Verification Method |
|---|---|---|---|---|---|---|
| Art. 9(2)(a) | Risk Management | Risk identification | AI.2.1 Risk Classification | Risk tier scoring record (six dimensions, signed off) | Product owner + risk function | Review scoring against tier criteria; verify governance approval |
| Art. 9(2)(b) | Risk Management | Risk estimation | AI.2.2 Risk Assessment | Quantified risk assessment (inherent → residual per threat) | Risk function | Verify calculation methodology matches framework; check recalibration date |
| Art. 9(4) | Guardrails | Risk mitigation | AI.7 Guardrails | Guardrail rule configuration export + red team results | Engineering + security | Run red team scenarios; compare block rate to documented effectiveness |
| Art. 9(4) | AI Evaluation | Risk mitigation | AI.8 Judge | Judge evaluation criteria + accuracy measurement report | Engineering + risk function | Compare Judge accuracy against labelled evaluation dataset (≥95% target) |
| Art. 9(4) | Human Oversight | Risk mitigation | AI.9 HITL | HITL sampling configuration + reviewer agreement study | Operations | Verify sampling rates match tier; review inter-rater reliability scores |
| Art. 9(8) | All | Ongoing management | AI.2.3 Ongoing Risk Monitoring | Quarterly risk recalibration report | Risk function | Verify effectiveness rates updated with latest red team/Judge/HITL data |
| Art. 12(1) | Logging | Record-keeping | AI.11.1 Logging | Log configuration showing required fields (LOG-01) | Engineering | Verify all 14 required fields present; sample 100 records |
| Art. 12(2) | Logging | Traceability | AI.11.1 Logging | Sample log export with correlation IDs | Engineering | Trace one transaction end-to-end: input → guardrail → model → Judge → output |
| Art. 12 | Logging | Retention | AI.11.1 Logging | Retention policy document + verification evidence | IT operations | Verify oldest retained log meets retention period (7 years for CRITICAL) |
| Art. 14(1) | Human Oversight | Human oversight design | AI.9.1 HITL | HITL operating procedure + review interface screenshots | Operations | Walk through one review cycle; verify reviewer has full context |
| Art. 14(3)(a) | Human Oversight | Understanding capability | AI.9.1 HITL | Reviewer training records + competency assessment | Operations / HR | Verify training covers system capabilities, limitations, and automation bias |
| Art. 14(4)(a) | Human Oversight | Override capability | AI.9.3 Human Override | Override mechanism documentation + test evidence | Engineering | Execute override; verify system responds within SLA |
| Art. 14(4)(b) | Circuit Breaker | Stop capability | PACE Emergency | Emergency stop procedure + last drill record | Engineering + operations | Verify last drill within 90 days; review restoration time |
| Art. 15(1) | AI Evaluation | Accuracy | AI.6.2 Validation | Model validation report + Judge accuracy metrics | Engineering + risk function | Compare stated accuracy to validation dataset results |
| Art. 15(3) | Guardrails | Cybersecurity | AI.7 Guardrails | Guardrail test results (injection, data leakage) | Security | Run OWASP LLM Top 10 test cases; verify block rates |
| Art. 15(4) | All | Resilience | PACE Resilience | PACE state definitions + last failover drill record | Engineering + operations | Verify each PACE state is defined; review drill results and restoration times |
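The Art. 12(2) verification method in the matrix above (trace one transaction end-to-end) can be automated against a log export. A sketch assuming records with `correlation_id`, `timestamp`, and `stage` fields; the stage names are illustrative:

```python
# Sketch of the Art. 12(2) audit check: given exported log records,
# confirm one transaction appears at every pipeline stage, in order.
# Stage names and record shape are illustrative assumptions.

EXPECTED_STAGES = ["input", "guardrail", "model", "judge", "output"]

def trace_transaction(records: list[dict], correlation_id: str) -> bool:
    """True if the correlation ID appears at every stage, in order."""
    stages = [
        r["stage"]
        for r in sorted(records, key=lambda r: r["timestamp"])
        if r["correlation_id"] == correlation_id
    ]
    return stages == EXPECTED_STAGES

records = [
    {"correlation_id": "txn-42",
     "timestamp": f"2026-01-01T00:00:0{i}Z",
     "stage": s}
    for i, s in enumerate(EXPECTED_STAGES)
]
print(trace_transaction(records, "txn-42"))  # True
```

An auditor can run this over a sampled export; a single missing stage fails the trace and points at the gap.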
Article 9 Evidence (Detail)¶
| Evidence | Source |
|---|---|
| Risk assessment methodology | Documented methodology |
| Risk classification for system | Classification record |
| Control selection rationale | Mapping to risk tier |
| Test results | Validation reports, golden set results |
| Ongoing monitoring reports | Judge summaries, drift reports |
Article 14 Evidence (Detail)¶
| Evidence | Source |
|---|---|
| HITL process documentation | Operating procedures |
| HITL coverage by tier | Configuration records |
| Override capability | System documentation, logs |
| HITL decision records | Review logs with decisions |
| Training records | Staff training documentation |
Article 12 Evidence (Detail)¶
| Evidence | Source |
|---|---|
| Logging configuration | Technical documentation |
| Sample logs | Log exports |
| Retention compliance | Retention policy, verification |
| Tamper-evidence | Log integrity verification |
GDPR Article 22 Alignment¶
GDPR Article 22 gives data subjects the right not to be subject to solely automated decisions with legal or significant effects.
How the framework satisfies this:
| GDPR Requirement | Framework Implementation |
|---|---|
| Not solely automated | HITL for all CRITICAL decisions |
| Meaningful human involvement | Humans decide, not rubber-stamp |
| Right to human intervention | Override capability |
| Right to explanation | Judge reasoning + human decision trail |
The Judge's role is critical here:
- The Judge does NOT make decisions
- The Judge surfaces findings for humans
- Humans make the decision
- Therefore, the decision is not "solely automated"
Implementation Checklist¶
For High-Risk AI Systems (CRITICAL Tier)¶
- Risk management system documented (Art. 9)
- Data governance documented (Art. 10)
- Technical documentation complete (Art. 11)
- Logging implemented with appropriate retention (Art. 12)
- Transparency requirements met (Art. 13)
- Human oversight implemented (Art. 14)
  - 100% HITL for consequential decisions
  - Override capability functional
  - Humans trained and accountable
- Accuracy and security validated (Art. 15)
  - Guardrails deployed and tested
  - Judge deployed with 100% sampling
  - HITL processes operational
  - Feedback loops active
Limitations¶
This crosswalk provides guidance, not legal advice. Key limitations:
- Interpretation may vary: Regulatory interpretation is still evolving
- Technical standards pending: EU is developing harmonised standards
- Context matters: Specific implementation depends on your use case
- Legal advice required: Consult legal counsel for your situation
AI Runtime Behaviour Security, 2026 (Jonathan Gill).