R0044/2026-03-29/Q001/SRC02/E01¶
EU AI Act Article 14 imposes system-side design requirements for human oversight that explicitly reference automation bias prevention.
URL: https://artificialintelligenceact.eu/article/14/
Extract¶
Article 14 states that "high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use."
The Article further requires that the natural persons to whom oversight is assigned remain "aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons."
The system must be provided in a way that enables the overseer to understand its capabilities and limitations, monitor its operation and detect and address anomalies, avoid over-reliance on its output, correctly interpret that output, decide not to use the system or disregard its output, and intervene in or stop its operation.
Additionally, the AI Act's separate transparency provisions (Article 50) require that humans interacting with AI systems such as chatbots be made aware that they are interacting with a machine.
Relevance to Hypotheses¶
| Hypothesis | Relationship | Rationale |
|---|---|---|
| H1 | Supports | The strongest system-side requirement found: systems SHALL be designed to enable effective oversight and awareness of automation bias |
| H2 | Contradicts | An explicit, legally binding system-design requirement, directly contradicting H2 |
| H3 | Supports | While the requirement is explicit, it addresses system design for oversight rather than constraining system output behavior. The system must help the user remain aware of automation bias, but it need not refuse to produce agreeable output. |
Context¶
Article 14 is the closest any regulation comes to requiring system-side constraints on automation bias. However, it remains focused on enabling human oversight rather than constraining what the system outputs. The requirement is "design the system so that humans can oversee it effectively" rather than "design the system so it does not reinforce user assumptions." This is a meaningful but incomplete approach to the sycophancy problem.
Notes¶
The Article's page was not directly fetchable (HTTP 403 error). Content is based on multiple secondary sources that consistently quote the same provisions, and on the full text of the EU AI Act as published elsewhere.