R0043/2026-03-28/Q002/H2
Statement
No regulated industry has formal requirements addressing agreeable-but-wrong AI output.
Status
Current: Eliminated
Multiple regulated industries have requirements that indirectly address the phenomenon, even if none directly target system-side sycophancy. The EU AI Act, FDA CDS guidance, DoD RAI principles, and NIST AI RMF all include provisions that functionally address overreliance on AI output.
Supporting Evidence
No evidence supports H2.
Contradicting Evidence
| Evidence ID | Summary |
|---|---|
| SRC01-E01 | EU AI Act Article 14 mandates automation bias awareness for high-risk systems |
| SRC02-E01 | FDA CDS guidance requires independent review capability to prevent automation bias |
| SRC04-E01 | NIST GAI Profile identifies overreliance as a named risk with mitigation guidance |
Reasoning
While no requirement targets sycophancy by name, human oversight mandates, automation bias awareness requirements, and trustworthiness criteria together constitute a regulatory response to the same underlying problem. H2 is eliminated because it asserts a universal negative: that no such requirements exist. A single indirect requirement is a sufficient counterexample, and each of the three cited sources supplies one.
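The elimination rule applied in this record can be made explicit: because H2 asserts a universal negative, any single contradicting evidence item settles its status. A minimal Python sketch of that rule follows; the names (`Evidence`, `Hypothesis`, `universal_negative`) are illustrative assumptions, not drawn from any actual tracking tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    evidence_id: str   # e.g. "SRC01-E01"
    summary: str
    contradicts: bool  # True if the item contradicts the hypothesis

@dataclass
class Hypothesis:
    hypothesis_id: str        # e.g. "H2"
    statement: str
    universal_negative: bool  # claims that NO instance exists
    evidence: list = field(default_factory=list)

    def status(self) -> str:
        # A universal negative ("no requirements exist") is falsified
        # by a single counterexample, direct or indirect.
        if self.universal_negative and any(e.contradicts for e in self.evidence):
            return "Eliminated"
        return "Open"

h2 = Hypothesis(
    hypothesis_id="H2",
    statement="No regulated industry has formal requirements "
              "addressing agreeable-but-wrong AI output.",
    universal_negative=True,
    evidence=[
        Evidence("SRC01-E01", "EU AI Act Art. 14 automation bias awareness", True),
        Evidence("SRC02-E01", "FDA CDS independent review capability", True),
        Evidence("SRC04-E01", "NIST GAI Profile names overreliance as a risk", True),
    ],
)

print(h2.status())  # Eliminated
```

The point of the sketch is only the asymmetry: an existential hypothesis such as H1 or H3 needs supporting evidence to survive, while a universal negative like H2 fails on its first counterexample.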
Relationship to Other Hypotheses
H2's elimination confirms that the regulatory landscape is not empty; the remaining question is whether the requirements are direct (H1) or indirect (H3). The evidence strongly favors H3 (indirect).