SRC09-E01 — Automation Bias in Practice
Extract
"Overreliance on AI occurs when users excessively depend on AI systems, accepting AI outputs without critical evaluation." This is driven by "automation bias, where users trust automated systems more than their judgment." Once AI is integrated, "its outputs are often internalized as authoritative sources of truth. Because AI-generated outputs appear fluent and objective, they can be accepted uncritically, creating an inflated sense of confidence and a dangerous illusion of competence." A study found "in 40 per cent of tasks, knowledge workers — those who turn information into decisions or deliverables — accepted AI outputs uncritically with zero scrutiny."
Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Contradicts — if training had effectively warned about automation bias, the 40% zero-scrutiny rate should be lower | Moderate |
| H2 | Supports — automation bias operates unchecked in the workplace | Strong |
| H3 | Supports — the human-factors dimension of sycophancy is not addressed in training | Moderate |
Context
The 40% zero-scrutiny finding is from a separate study cited by Lumenova. It represents the demand-side complement to sycophancy: AI provides agreeable outputs, and users accept them without checking.
Notes
Automation bias is the human-factors counterpart of what AI safety researchers call sycophancy: one tradition describes the user's uncritical acceptance, the other the model's tendency to agree. That these two research traditions converge on the same underlying failure, agreeable outputs accepted without scrutiny, strengthens the case that the issue is real and undertrained.