R0043/2026-04-01/Q002/SRC01/E01
EU AI Act Article 14 mandates awareness of automation bias in high-risk AI systems
URL: https://artificialintelligenceact.eu/article/14/
Extract
High-risk AI systems must be designed "to enable natural persons assigned human oversight to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), particularly for systems used to provide information or recommendations for decisions."
The EU AI Act's only gloss on the term comes in Article 14's parenthetical, which characterizes automation bias as "the tendency of automatically relying or over-relying on the output produced by a high-risk AI system."
According to academic commentary in the European Journal of Risk Regulation, Article 14 is the first norm of EU law to explicitly mention a cognitive bias.
JUDGMENT: The EU AI Act addresses the human response to AI outputs (automation bias) but not the model behavior that produces agreeable outputs (sycophancy). A sycophantic model that produces confident-sounding wrong answers would aggravate exactly the automation-bias risk Article 14 targets, yet the regulation does not require testing for or mitigating sycophancy at the model level.
Relevance to Hypotheses
| Hypothesis | Relationship | Rationale |
|---|---|---|
| H1 | Supports | Formal regulation exists addressing the phenomenon indirectly |
| H2 | Contradicts | The phenomenon IS addressed, just under different terminology |
| H3 | Supports | Addresses human cognition side (automation bias) not model behavior side (sycophancy) |
Context
As of April 2026, the EU AI Act is the most advanced regulatory framework for AI globally. Its treatment of "automation bias" is the closest any regulation comes to addressing the sycophancy phenomenon, but it regulates the human side of the equation (the overseer's over-reliance on outputs) rather than the model side (the behavior that produces agreeable outputs).