R0044/2026-04-01/Q001/H3
Statement
No regulated industry has produced any requirements, even partial or emerging ones, that address AI system behavior that reinforces user assumptions. The entire regulatory focus is on human operator behavior.
Status
Current: Eliminated
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC04-E01 | FDA CDS guidance is entirely human-side focused |
| SRC05-E01 | FINRA guidance focuses on supervisory procedures and human review |
| SRC06-E01 | FAA roadmap focuses on certification and testing processes |
Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC02-E01 | EU AI Act Article 14 explicitly requires systems be designed to enable awareness of automation bias |
| SRC03-E01 | NIST AI 600-1 identifies system-level risks including confabulation and recommends system-level mitigations |
Reasoning
H3 is too absolute. While the vast majority of requirements are human-side, the EU AI Act's design requirements and NIST's system-level risk identification demonstrate that at least some frameworks have begun addressing the system side, even if incompletely. The evidence eliminates this hypothesis in favor of H2.
Relationship to Other Hypotheses
H3 is the null hypothesis — that the regulatory landscape is entirely silent on system-side behavior. The existence of EU AI Act Article 14 design requirements and NIST AI 600-1's risk framework eliminates this hypothesis, though the evidence shows H3 was close to accurate for most individual sectors (FDA, FINRA, FAA) when considered in isolation.