R0044/2026-04-01/Q001/H1
Statement
Regulated industries have produced specific, enforceable requirements that constrain AI system behavior itself — not just human operator behavior — to prevent systems from reinforcing user assumptions or providing agreeable-but-incorrect output.
Status
Current: Eliminated
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC02-E01 | EU AI Act Article 14 requires that high-risk systems be designed so that human overseers can remain aware of the risk of automation bias |
| SRC03-E01 | NIST AI 600-1 identifies confabulation and human-AI configuration as risks requiring mitigation |
Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC04-E01 | FDA CDS guidance focuses on human-side independent review, not system-side output constraints |
| SRC05-E01 | FINRA guidance addresses supervisory procedures, not AI output behavior |
| SRC06-E01 | FAA roadmap addresses certification and testing, not output behavioral constraints |
Reasoning
While fragments of system-side requirements exist (the EU AI Act's design requirements, NIST's risk identification), none constitute specific, enforceable requirements that directly constrain how an AI system generates output so as to prevent sycophantic or assumption-reinforcing behavior. The existing provisions are either voluntary guidance (NIST), focused on oversight and transparency rather than output behavior (EU AI Act), or entirely human-side (FDA, FINRA, FAA).
Relationship to Other Hypotheses
H1 represents the strongest version of the claim, namely that mature, enforceable system-side requirements already exist; the evidence does not support it. H2 (partial or emerging requirements) is better supported, while H3 (no requirements at all) is too strong given the EU AI Act and NIST provisions.