R0043/2026-03-28/Q002/SRC01/E01
EU AI Act Article 14 automation bias requirements
URL: https://artificialintelligenceact.eu/article/14/
Extract
Article 14(4)(b) requires that high-risk AI systems be designed so that the natural persons assigned to human oversight are enabled to "remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias)."
Key regulatory characteristics:
- Obligation placed on: Providers, who must design high-risk systems to enable effective human oversight (Article 14); deployers carry the complementary duty to assign and support the human overseers (Article 26)
- Mechanism: Overseer awareness and override capability, not constraints on system behavior
- Scope: High-risk systems only (medical devices, biometric identification, employment, law enforcement, etc.)
- Effective: Phased; obligations for high-risk systems apply from 2 August 2026, with an extended deadline of 2 August 2027 for systems embedded in products regulated under Annex I
JUDGMENT: This is the strongest existing regulatory requirement addressing the sycophancy phenomenon, but it is fundamentally indirect. It requires that human overseers be made aware of the overreliance risk and be able to override the system, not that the system be designed to avoid sycophantic behavior. The provider's design obligation is to enable oversight, and the deployer's obligation (Article 26) is to staff and support it; neither obligation constrains the model's outputs directly.
Relevance to Hypotheses
| Hypothesis | Relationship | Rationale |
|---|---|---|
| H1 | Contradicts | The requirement is indirect, not direct and explicit |
| H2 | Contradicts | A requirement clearly exists |
| H3 | Strongly supports | Textbook example of indirect requirement: addresses the phenomenon through human oversight rather than system design |
Context
The EU AI Act's approach reflects the human-side vocabulary identified in Q001. Because regulators frame the problem as "automation bias" (a human cognitive failure), the remedy is likewise human-side: make the overseeing humans aware of the risk. A system-side framing ("sycophancy") would point toward a system-side remedy: constrain the model's behavior.