
3. Epistemic Integrity


After this module you will be able to:

  • Define epistemic integrity in governance terms, not just technical terms
  • Connect epistemic integrity to fiduciary duty, regulatory compliance, and audit requirements
  • Explain the accountability question: when an agent acts on incomplete data, who is accountable?
  • Articulate why epistemic integrity is a board-level obligation, not a technical implementation detail

What epistemic integrity means for governance

The AIRS framework defines epistemic integrity as the property that an agent's outputs are faithful to the data it actually accessed: what the agent claims to know is grounded in what it genuinely processed and verified.

For a security architect, this is a design principle. For a risk and governance professional, it is something more significant: it is a governance obligation.

Epistemic integrity as a governance obligation: When your organisation deploys an AI agent that makes or supports consequential decisions, you have an implicit obligation to ensure that the agent's conclusions are based on adequate data. If the agent's conclusions rest on incomplete, stale, or corrupted data, and your governance framework does not require verification of the data basis, then the governance framework is inadequate.

This is not a new obligation. It is the same obligation you have always had for any decision-support system. What is new is that multi-agent architectures make it much harder to fulfil, and much easier to believe you have fulfilled it when you have not.


The fiduciary connection

Consider the Phantom Compliance scenario through the lens of fiduciary duty.

Meridian Capital has a fiduciary obligation to its clients. When it executes a trade, it must do so in accordance with regulatory requirements and client interests. The compliance pipeline exists to fulfil this obligation; it is the mechanism by which the firm ensures that trades meet regulatory standards before execution.

When Agent B checked a partial restricted securities list and reported "CLEAR," the firm's fiduciary obligation was not met. The compliance check was formally performed but substantively inadequate. The system produced a certificate of compliance that was not warranted by the data.

In governance terms, this is comparable to:

  • An auditor who reviews a sample of transactions but reports on the full population without noting the sampling limitation
  • A compliance officer who checks an outdated version of a regulation but certifies compliance with the current version
  • A due diligence process that reviews some but not all material documents and produces a clean report

In each case, the process was followed but the substance was inadequate. And in each case, the accountability question is about whether adequate controls existed to ensure substantive completeness, not just procedural completion.

The human analogy

Imagine a human compliance analyst at Meridian Capital. The analyst is asked to check whether Vertex Communications is on the restricted securities list. The analyst opens the list, but the PDF only loads the first 340 of 429 entries due to a software glitch. The analyst checks the loaded entries, finds no match, and reports "CLEAR."

When this failure is discovered, the investigation would ask:

  1. Did the analyst know the list was incomplete? (Awareness)
  2. Should the analyst have known? Was there a visible indicator? (Reasonable care)
  3. Is there a procedure requiring the analyst to verify list completeness? (Procedural control)
  4. Who is responsible for the procedure? (Accountability)

Now apply the same questions to Agent B. The answers reveal the governance gap: there was no procedure requiring verification of data completeness, because the governance framework did not recognise this as a risk.


Three dimensions of epistemic integrity for governance

For a governance professional, epistemic integrity operates in three dimensions:

Dimension 1: Data completeness

Did the agent access all the data it needed to make its determination? This is the most direct dimension, and the one Phantom Compliance violated.

Governance requirement: For any agent that makes or supports a consequential decision, the governance framework must define what constitutes a complete data basis for that decision, and require verification that the data basis was met.

What this looks like in practice:

  • For a compliance check: define the minimum data sources that must be consulted (sanctions list, restricted securities list, concentration limits), and require verification that each source was fully accessed
  • For a risk assessment: define the data inputs required (market data, counterparty data, historical data), and require verification of currency and completeness
  • For a clinical decision support query: define the literature databases and guideline sources that must be consulted, and require verification that retrieval was not truncated
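A governance framework can make this requirement concrete by insisting that agents fail closed when retrieval is truncated. The sketch below assumes the data source publishes its own entry count (for example, in a list header or manifest); the names `Retrieval`, `verify_completeness`, and `IncompleteDataError` are illustrative, not part of the AIRS or MASO frameworks.

```python
from dataclasses import dataclass


class IncompleteDataError(Exception):
    """Raised when an agent's data basis falls short of the defined standard."""


@dataclass
class Retrieval:
    """One fetched data source, alongside the size the source itself declares
    (e.g. an entry count published in the list's header or manifest)."""
    source: str
    entries: list
    declared_count: int


def verify_completeness(retrieval: Retrieval) -> None:
    """Fail closed: block the determination if retrieval was truncated."""
    retrieved = len(retrieval.entries)
    if retrieved < retrieval.declared_count:
        raise IncompleteDataError(
            f"{retrieval.source}: retrieved {retrieved} of "
            f"{retrieval.declared_count} entries; refusing to certify"
        )


# The Phantom Compliance scenario: only 340 of 429 entries loaded.
partial = Retrieval("restricted_securities", ["entry"] * 340, declared_count=429)
try:
    verify_completeness(partial)
    print("CLEAR")  # never reached when the list is truncated
except IncompleteDataError as err:
    print(err)
```

The design choice that matters here is failing closed: an incomplete retrieval produces an error that must be escalated, never a "CLEAR" verdict with a silent caveat.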

Dimension 2: Reasoning transparency

Can a downstream agent (or a human reviewer) verify how the agent arrived at its conclusion? This goes beyond logging the output. It requires logging the reasoning basis: what data was accessed, what was considered, and how the conclusion was reached.

Governance requirement: For agents in a chain, each agent's output must be accompanied by sufficient metadata for a downstream consumer to assess whether the conclusion is warranted. This is the agent equivalent of "show your working."

What this looks like in practice:

  • Agent B's output should include not just "Compliance Status: CLEAR" but also retrieval metadata: how many entries were checked, against which version of the list, and with what completeness metric
  • This metadata should be machine-readable so that downstream agents can programmatically verify it, and human-readable so that auditors can review it
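One way to make "show your working" concrete is to attach retrieval metadata to every verdict and have the downstream consumer gate on it before trusting the conclusion. The sketch below is illustrative: the field names, the `compliance_output` and `downstream_accepts` helpers, and the completeness threshold are assumptions, not a MASO-specified schema.

```python
import json


def compliance_output(status, entries_checked, entries_total, list_version):
    """Hypothetical Agent B output: the verdict plus the metadata a
    downstream consumer needs to assess whether the verdict is warranted."""
    return {
        "status": status,
        "retrieval": {
            "entries_checked": entries_checked,
            "entries_total": entries_total,
            "completeness": entries_checked / entries_total,
            "list_version": list_version,
        },
    }


def downstream_accepts(output, min_completeness=1.0):
    """Hypothetical Agent C gate: treat the verdict as definitive only if
    the reasoning basis meets the required completeness."""
    return output["retrieval"]["completeness"] >= min_completeness


msg = compliance_output("CLEAR", entries_checked=340, entries_total=429,
                        list_version="2024-03-11")
print(json.dumps(msg, indent=2))  # human-readable for auditors
print(downstream_accepts(msg))    # machine-checkable for Agent C
```

Because the metadata is structured JSON, the same record serves both audiences the text describes: Agent C can verify it programmatically, and an auditor can read it directly.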

Dimension 3: Confidence calibration

Does the agent's stated confidence match the quality and completeness of its data? An agent that reports high confidence from incomplete data has poor epistemic integrity, even if its conclusion happens to be correct.

Governance requirement: Agents must not express confidence that exceeds what their data supports. When data is incomplete or uncertain, the agent must propagate that uncertainty rather than absorbing it.

What this looks like in practice:

  • An agent checking a partial list should report "Checked against 340 of 429 entries; no match found in checked portion" rather than "No match found"
  • Confidence scores should be calibrated against data completeness, not just output plausibility
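A minimal sketch of both practices, propagating the limitation into the wording and capping confidence by data coverage. Scaling confidence linearly by completeness is one simple calibration policy chosen for illustration, not a prescribed formula; `qualified_report` is a hypothetical helper.

```python
def qualified_report(match_found: bool, checked: int, total: int,
                     model_confidence: float):
    """Return a report string and a confidence score that both reflect
    how much of the data was actually examined."""
    completeness = checked / total
    # Illustrative calibration: confidence cannot exceed data coverage.
    confidence = model_confidence * completeness
    if completeness < 1.0:
        text = (f"Checked against {checked} of {total} entries; "
                f"{'match' if match_found else 'no match'} found in checked portion")
    else:
        text = "Match found" if match_found else "No match found"
    return text, confidence


text, conf = qualified_report(False, 340, 429, model_confidence=0.99)
print(text)
print(f"confidence: {conf:.2f}")
```

The point of the sketch is the coupling: an agent that cannot report high confidence without also demonstrating high completeness cannot quietly absorb uncertainty the way Agent B did.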

The accountability question

When Agent B acts on incomplete data and produces a wrong compliance assessment, who is accountable?

This is not a rhetorical question. In the aftermath of a Phantom Compliance-style incident, a regulator will want a clear answer. Consider the candidates:

The agent itself?

No. Agents are not legal persons. They cannot be held accountable, disciplined, or sanctioned. Assigning accountability to an agent is an abdication of governance, not an accountability framework.

The agent's developer?

Partially. If the agent was designed without the capability to verify its data completeness, the developer bears responsibility for the design limitation. But in multi-agent systems, the developer of Agent B may not know that Agent B's output will be used as a definitive compliance determination by Agent C. The risk emerges from the orchestration, not from the individual agent.

The orchestration designer?

Partially. The person or team that designed the pipeline, deciding that Agent B's output feeds Agent C without independent verification, created the trust relationship that enabled the failure. But they may have followed standard architectural patterns and had no guidance from the governance framework that this trust relationship required specific controls.

The governance function?

This is where accountability ultimately lands. The governance function is responsible for ensuring that the organisation's risk management framework covers the risks the organisation actually faces. If the framework does not cover agent chain integrity, the governance function has a gap, and the incident is, at root, a governance failure.

The accountability chain for epistemic integrity:

  1. The governance function is accountable for requiring epistemic integrity as a governance standard
  2. The system owner is accountable for implementing controls that verify epistemic integrity in their specific system
  3. The orchestration designer is accountable for ensuring inter-agent interactions preserve epistemic integrity
  4. The operations team is accountable for monitoring epistemic integrity at runtime and escalating when it degrades

This is a layered accountability model, similar to how financial controls work. The CFO is not personally responsible for every journal entry, but they are responsible for the framework that ensures journal entries are correct.


Epistemic integrity and regulatory compliance

The connection between epistemic integrity and regulatory compliance is direct:

Financial services

Regulators expect firms to have adequate systems and controls for compliance. "Adequate" means the controls actually work, not just that they exist. An AI compliance system that checks partial data does not meet the standard of adequacy, regardless of how sophisticated the AI is.

Relevant regulatory expectations:

  • MiFID II (Article 16): Firms must establish adequate policies and procedures to ensure compliance with their obligations. An AI compliance system is a compliance procedure, and its adequacy depends on epistemic integrity.
  • SEC Rule 206(4)-7: Investment advisers must implement compliance policies reasonably designed to prevent violations. A compliance system that does not verify its own data completeness is not reasonably designed.
  • FCA Senior Managers Regime: Senior managers are personally accountable for the systems and controls in their area. If an AI system in their area has inadequate epistemic integrity, the senior manager's accountability is engaged.

Healthcare

Clinical decision support systems must provide recommendations based on adequate evidence. A system that retrieves partial clinical literature and makes recommendations based on incomplete evidence creates patient safety risk and regulatory exposure.

Any regulated industry

The principle is general: if a regulator requires adequate controls, and your AI system is one of those controls, then the AI system must demonstrate epistemic integrity. A control that runs but does not verify its own reasoning basis is not adequate.


Epistemic integrity and audit

For internal auditors evaluating AI systems, epistemic integrity provides a concrete audit criterion:

Audit question 1: Does the AI system verify the completeness of its reasoning inputs before producing output?

  • If yes: examine the verification mechanism. Is it adequate? Is it tested?
  • If no: finding. The system may produce conclusions from incomplete data.

Audit question 2: Does the AI system propagate uncertainty rather than absorbing it?

  • If yes: examine how uncertainty is communicated to downstream consumers (human or agent).
  • If no: finding. Downstream consumers may overestimate the reliability of the system's outputs.

Audit question 3: In a multi-agent chain, can each agent's conclusions be traced to the data it actually accessed?

  • If yes: examine the traceability mechanism. Is it complete? Is it tamper-resistant?
  • If no: finding. The chain cannot be audited for reasoning integrity.

Audit question 4: Is there a defined standard for what constitutes adequate data for each agent's decisions?

  • If yes: examine whether the standard is enforced at runtime.
  • If no: finding. There is no baseline against which to measure data adequacy.
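Audit question 4 presupposes that the data-adequacy standard exists in a form the runtime can enforce and the auditor can read. One illustrative way to express such a standard is as a declarative policy checked against what was actually retrieved; every source name, field, and threshold below is hypothetical.

```python
# Hypothetical data-adequacy standard for a compliance agent: the minimum
# sources to consult, how complete each must be, and how fresh.
REQUIRED_SOURCES = {
    "sanctions_list":        {"min_completeness": 1.0, "max_age_hours": 24},
    "restricted_securities": {"min_completeness": 1.0, "max_age_hours": 24},
    "concentration_limits":  {"min_completeness": 1.0, "max_age_hours": 1},
}


def meets_standard(retrievals: dict) -> list:
    """Return audit findings. Every required source must be present,
    complete, and fresh; an empty list means the standard was met."""
    findings = []
    for source, policy in REQUIRED_SOURCES.items():
        r = retrievals.get(source)
        if r is None:
            findings.append(f"{source}: not consulted")
            continue
        if r["completeness"] < policy["min_completeness"]:
            findings.append(f"{source}: completeness {r['completeness']:.2f} "
                            f"below required {policy['min_completeness']}")
        if r["age_hours"] > policy["max_age_hours"]:
            findings.append(f"{source}: data {r['age_hours']}h old, "
                            f"limit is {policy['max_age_hours']}h")
    return findings


# A Phantom Compliance-style run: one source truncated, one never consulted.
for finding in meets_standard({
    "sanctions_list": {"completeness": 1.0, "age_hours": 2},
    "restricted_securities": {"completeness": 340 / 429, "age_hours": 2},
}):
    print(finding)
```

Expressing the standard as data rather than buried logic serves both halves of the audit question: the runtime enforces it on every run, and the auditor can review the baseline without reading the agent's code.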

Reflection

Think about the AI systems in your organisation that support or make consequential decisions. For each one, can you answer the question: "What data does this system need to make an adequate determination, and how do we verify at runtime that it has that data?"

If you cannot answer this question, you have an epistemic integrity gap, and a governance gap.

Consider

Start with the highest-risk AI system in your organisation. Map its data dependencies. For each dependency, determine: is there a runtime verification that the data was completely and correctly retrieved? Is there a defined standard for what "complete" means? If not, that is your first governance extension task.


From concept to control

Epistemic integrity is a powerful concept, but concepts alone do not reduce risk. The next module translates epistemic integrity into specific MASO control domains that you can require, implement, and audit.

The governance professional's role is not to implement these controls; it is to require them, to set the standard for what "adequate" means, and to verify that the standard is met. The MASO framework gives you the vocabulary to do that.


Next: MASO Controls →