R0044/2026-04-01/Q001 — Self-Audit

ROBIS 4-Domain Audit (plus Source-Back Verification)

Domain 1: Eligibility Criteria

Rating: Low risk

| Criterion | Assessment |
| --- | --- |
| Criteria defined before searching | Yes — system-side vs. human-side distinction established before searches |
| Criteria applied consistently | Yes — same distinction applied across all sectors |
| Criteria shift detected | No — criteria remained stable throughout |

Notes: The system-side vs. human-side distinction was clear from the query and applied consistently to all sources.

Domain 2: Search Comprehensiveness

Rating: Some concerns

| Criterion | Assessment |
| --- | --- |
| Multiple search strategies used | Yes — 3 searches across regulatory standards, cross-sector frameworks, and sector-specific guidance |
| Searches designed to test each hypothesis | Yes — searched for both the existence and absence of system-side requirements |
| All results dispositioned | Yes — 60 results returned, all dispositioned |
| Source diversity achieved | Yes — EU legislation, US federal standards, sector-specific regulators |

Notes: Several key PDFs (NIST AI 600-1, CSET brief, GAO report) were inaccessible for full-text analysis. Secondary sources were substituted, but the full texts might have revealed more specific system-side provisions. Classified/restricted procurement specifications were also inaccessible.

Domain 3: Evaluation Consistency

Rating: Low risk

| Criterion | Assessment |
| --- | --- |
| All sources scored using same framework | Yes — reliability, relevance, and bias dimensions applied uniformly |
| Evidence typed consistently | Yes — Factual vs. Analytical typing applied consistently |
| ACH matrix applied | Yes — all evidence mapped to all hypotheses |
| Diagnosticity analysis performed | Yes — most and least diagnostic evidence identified |

Notes: Scoring was consistent across sources regardless of whether they supported or contradicted the researcher's likely expectation.
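The ACH matrix and diagnosticity steps above can be sketched in code. This is a minimal illustration of the technique, not the tooling actually used; the evidence IDs, hypothesis labels, and consistency scores below are hypothetical.

```python
# Minimal ACH (Analysis of Competing Hypotheses) diagnosticity sketch.
# Each cell scores how consistent a piece of evidence is with a hypothesis:
# +1 consistent, 0 neutral, -1 inconsistent. Evidence is diagnostic when its
# scores differ across hypotheses; identical scores discriminate nothing.

matrix = {
    # evidence_id: {hypothesis: consistency score}  (hypothetical values)
    "E1": {"H1": -1, "H2": +1},
    "E2": {"H1": 0, "H2": 0},
    "E3": {"H1": +1, "H2": -1},
}

def diagnosticity(scores):
    """Spread of consistency scores across hypotheses; 0 = non-diagnostic."""
    values = list(scores.values())
    return max(values) - min(values)

# Rank evidence from most to least diagnostic.
ranked = sorted(matrix, key=lambda e: diagnosticity(matrix[e]), reverse=True)
for eid in ranked:
    print(eid, diagnosticity(matrix[eid]))
```

In this sketch, E2 scores identically against both hypotheses and so ranks last: it is consistent with everything and therefore discriminates nothing.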

Domain 4: Synthesis Fairness

Rating: Low risk

| Criterion | Assessment |
| --- | --- |
| All hypotheses given fair hearing | Yes — H1 (enforceable requirements exist) was actively searched for despite the evidence trending toward H2 |
| Contradictory evidence surfaced | Yes — EU AI Act Article 14 was surfaced as the strongest partial exception to the dominant finding |
| Confidence calibrated to evidence | Yes — Medium confidence reflects accessible published frameworks but inaccessible procurement specifications |
| Gaps acknowledged | Yes — classified procurement specs, inaccessible PDFs, and forthcoming EU implementing regulations all documented |

Notes: The finding aligns with what the query framing suggested, but the assessment is supported by evidence from multiple independent regulatory bodies.

Domain 5: Source-Back Verification

Rating: Low risk

| Source | Claim in Assessment | Source Actually Says | Match? |
| --- | --- | --- | --- |
| SRC02 | Article 14 requires systems be "designed" for automation bias awareness | Article 14 uses "designed and developed in such a way" language targeting providers | Yes |
| SRC04 | FDA relaxed multiple alternatives requirement | Cooley analysis confirms singular recommendations now qualify for enforcement discretion | Yes |
| SRC03 | NIST identifies confabulation as a risk | Secondary sources confirm confabulation is one of 12 identified GenAI-specific risks | Yes |
| SRC05 | FINRA focuses on supervisory procedures | FINRA guidance references Rules 3110/3120 supervisory requirements | Yes |

Discrepancies found: 0

Corrections applied: None needed

Unresolved flags: None

Notes: All claims verified against source content. The limitation is that some verification relied on secondary-source analysis (Cooley for FDA, various summaries for NIST) rather than the primary documents' full text.
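The verification bookkeeping above can be represented as a small record structure. This is an illustrative sketch under assumed field names, not the actual audit tooling, and the record contents are abbreviated from the table.

```python
from dataclasses import dataclass

@dataclass
class VerificationRecord:
    source_id: str     # e.g. "SRC02"
    claim: str         # what the assessment asserts
    source_text: str   # what the source actually says
    match: bool        # did claim and source agree?

# Abbreviated examples drawn from the verification table.
records = [
    VerificationRecord("SRC02",
                       "Article 14 requires systems be designed for automation bias awareness",
                       "Article 14 uses 'designed and developed' language targeting providers",
                       True),
    VerificationRecord("SRC04",
                       "FDA relaxed multiple alternatives requirement",
                       "Singular recommendations now qualify for enforcement discretion",
                       True),
]

# A discrepancy is any record where claim and source disagree.
discrepancies = [r for r in records if not r.match]
print(f"Discrepancies found: {len(discrepancies)}")
```

Keeping the claim and the source text side by side in one record makes the discrepancy count auditable after the fact rather than a bare number.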

Overall Assessment

Overall risk of bias: Low risk

The research process was systematic, covering all four target sectors plus cross-sector frameworks. The main limitation is inaccessibility of some primary documents (PDFs) and the inherent limitation that procurement specifications may contain requirements not visible in published regulatory frameworks.

Researcher Bias Check

  • Confirmation bias risk: The query framing implied the researcher expected a gap in system-side requirements. The evidence confirmed this expectation, raising the risk that the search was optimized to confirm the gap rather than find exceptions. Mitigated by: actively searching for system-side requirements using expanded vocabulary; identifying EU AI Act Article 14 as a partial exception; documenting inaccessible sources that might contain stronger system-side provisions.
  • Availability bias risk: Publicly accessible regulatory documents are easier to find than procurement specifications or internal agency guidance. The assessment may underrepresent system-side requirements that exist in less accessible formats.