R0057/2026-04-01/C024 — Claim Definition

Claim as Received

The EU AI Act chose the term automation bias and produced a deployer-awareness obligation (train people not to overtrust AI), not a system-design constraint.

Claim as Clarified

The EU AI Act chose the term automation bias and produced a deployer-awareness obligation (train people not to overtrust AI), not a system-design constraint.

BLUF

Confirmed. Article 14 of the EU AI Act explicitly uses the term 'automation bias' and requires that oversight personnel remain aware of 'the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias).' This is a deployer-awareness obligation, not a system-design constraint on AI providers.

Scope

  • Domain: AI sycophancy research
  • Timeframe: Current (2024-2026)
  • Testability: Verifiable against published research and public records

Assessment Summary

Probability: Very likely (80-95%)

Confidence: High

Hypothesis outcome: H1 is supported based on available evidence.

[Full assessment in assessment.md.]

Status

Field               Value
Date created        2026-04-01
Date completed      2026-04-01
Researcher profile  Phillip Moore
Prompt version      Unified Research Methodology v1
Revisit by          2027-04-01
Revisit trigger     If the EU AI Act is amended to include system-design constraints for sycophancy