R0021/2026-03-25/Q007 — Assessment

BLUF

Substantial published research exists on AI decision auditing and explainability. DARPA invested in a 4-year XAI program (2017-2021) with ~12,700 user study participants. The EU has legislated explainability requirements through GDPR Article 22 and AI Act Article 86. A systematic search found 2,425 XAI articles published from 2022 to 2025 alone. However, practical challenges remain: post-hoc explanation methods such as LIME and Shapley-value attribution only approximate model behavior, and constrained traceability hinders both error identification and regulatory compliance.

Probability

Rating: Almost certain (95-99%) that substantial research exists; Very likely (80-95%) that practical challenges persist.

Confidence in assessment: High

Confidence rationale: Multiple high-quality sources from government programs and regulatory frameworks.

Reasoning Chain

  1. DARPA XAI (2017-2021) was a major government investment in AI explainability with ~12,700 study participants [SRC01-E01, High reliability, High relevance]
  2. EU mandates explainability through GDPR Art 22 ("meaningful information about the logic involved") and AI Act Art 86 ("clear and meaningful explanations") [SRC02-E01, High reliability, High relevance]
  3. 2,425 XAI papers published 2022-2025 demonstrate active and growing research [SRC03-E01, High reliability, High relevance]
  4. Post-hoc methods are approximations; "constrained traceability obstructs identification and rectification of errors" [SRC03-E01]
  5. JUDGMENT: The contrast with prompt engineering is significant — AI decision auditing has formal research programs, regulatory requirements, and thousands of papers. Prompt engineering has none of these.
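Step 4 above turns on post-hoc methods being approximations. A minimal sketch of why: a LIME-style explainer fits a simple surrogate to a black box near one instance, so the "explanation" is the surrogate's coefficients, not the model's actual logic. Everything here is illustrative, not from the cited sources: `black_box` is a stand-in for an opaque model, and the perturbation scale and kernel width are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical black-box model standing in for an opaque AI system
# (assumption for illustration; not from the cited sources).
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(0)
x0 = np.array([1.0, 0.5])  # the single decision we want to "explain"

# LIME-style local surrogate: sample perturbations around x0,
# weight them by proximity to x0, and fit an interpretable linear model.
X_pert = x0 + rng.normal(scale=0.3, size=(500, 2))
y_pert = black_box(X_pert)
weights = np.exp(-np.sum((X_pert - x0) ** 2, axis=1) / 0.25)

surrogate = Ridge(alpha=1.0).fit(X_pert, y_pert, sample_weight=weights)

# The coefficients are a *local linear approximation* of the black box,
# valid only near x0 and never identical to the model's true logic.
print("local coefficients:", surrogate.coef_)
print("black-box at x0:   ", black_box(x0[None, :])[0])
print("surrogate at x0:   ", surrogate.predict(x0[None, :])[0])
```

The residual gap between the black-box and surrogate outputs is exactly the approximation error that step 4 flags as an auditing risk: an auditor reading the coefficients sees a simplified story, not the model itself.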

Evidence Base Summary

| Source | Description         | Reliability | Relevance | Key Finding                          |
|--------|---------------------|-------------|-----------|--------------------------------------|
| SRC01  | DARPA XAI           | High        | High      | 4-year program, ~12,700 participants |
| SRC02  | EU AI Act/GDPR      | High        | High      | Legislated explainability requirements |
| SRC03  | XAI Research Review | High        | High      | 2,425 articles (2022-2025)           |

Collection Synthesis

| Dimension           | Assessment                                               |
|---------------------|----------------------------------------------------------|
| Evidence quality    | Robust                                                   |
| Source agreement    | High                                                     |
| Source independence | Independent (government, regulatory, and academic sources) |
| Outliers            | None                                                     |

Gaps

| Missing Evidence                         | Impact on Assessment                                          |
|------------------------------------------|---------------------------------------------------------------|
| DARPA XAI specific outcomes and metrics  | Moderate; would strengthen assessment of practical viability  |
| Industry adoption rates of XAI methods   | Moderate                                                      |

Researcher Bias Check

Declared biases: Researcher benefits from showing AI auditing is a serious field (contrasts with prompt engineering's informality).

Influence assessment: Influence judged low. The evidence comes from authoritative, independent sources, and the finding is factual rather than interpretive.

Cross-References

| Entity     | ID          | File           |
|------------|-------------|----------------|
| Hypotheses | H1, H2, H3  | hypotheses/    |
| Sources    | SRC01-SRC03 | sources/       |
| ACH Matrix |             | ach-matrix.md  |
| Self-Audit |             | self-audit.md  |
Self-Audit self-audit.md