R0043/2026-03-28/Q001/SRC05/E01
EU AI Act's legal definition of automation bias in Article 14
URL: https://artificialintelligenceact.eu/article/14/
Extract
The EU AI Act defines automation bias as the "tendency of automatically relying or over-relying on the output produced by a high-risk AI system."
Article 14 requires that high-risk AI systems be designed so that human overseers "remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions."
Required oversight capabilities include:
- Properly understanding the AI system's capacities and limitations
- Duly monitoring its operation
- Correctly interpreting the AI system's output
- Deciding not to use the system, or disregarding its output
JUDGMENT: The EU AI Act chose "automation bias" — a human-side term — as its regulatory concept. It does not name any system-side behavior (such as sycophancy). The legal framing obligates those designing and overseeing high-risk systems to keep human overseers aware of the risk of overreliance; it does not require preventing the system itself from being agreeable.
Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Supports | Confirms regulatory domain has its own codified term |
| H2 | Contradicts | Regulation explicitly names the phenomenon |
| H3 | Strongly supports | The most authoritative regulatory text uses human-side framing exclusively — no system-behavior equivalent to sycophancy appears in the legislation |
Context
The EU AI Act is the world's first comprehensive AI regulation. Its choice of "automation bias" as the operative term, rather than "sycophancy" or any other system-behavior term, will likely shape global regulatory vocabulary for years.