R0044/2026-03-29/Q001/SRC02
# EU AI Act Article 14: Human Oversight Requirements
## Source

| Field | Value |
| --- | --- |
| Title | EU AI Act — Article 14: Human Oversight |
| Publisher | European Parliament and Council of the European Union |
| Author(s) | European Parliament |
| Date | 2024 (enacted), phased implementation through 2026 |
| URL | https://artificialintelligenceact.eu/article/14/ |
| Type | Government source (legislation) |
## Summary

| Dimension | Rating |
| --- | --- |
| Reliability | High |
| Relevance | High |
| Bias: Missing data | Low risk |
| Bias: Measurement | N/A |
| Bias: Selective reporting | Low risk |
| Bias: Randomization | N/A — not an RCT |
| Bias: Protocol deviation | N/A — not an RCT |
| Bias: COI/Funding | Low risk |
## Rationale

| Dimension | Rationale |
| --- | --- |
| Reliability | Primary legislation from the world's first comprehensive AI regulatory framework; authoritative by definition. |
| Relevance | Directly addresses system design requirements for automation bias prevention in high-risk AI systems. The most explicit system-side requirement found in this research. |
| Bias flags | Legislative text — no commercial or research bias. Represents a political compromise but is the enacted law. |
## Evidence

| Evidence ID | Summary |
| --- | --- |
| SRC02-E01 | Article 14 requires high-risk AI systems to be designed so that human overseers can exercise effective oversight and remain aware of the possible tendency to over-rely on system outputs (automation bias). |