R0043/2026-03-28/Q001/SRC01/E01
Integrated model distinguishing automation bias from automation complacency across domains
URL: https://journals.sagepub.com/doi/10.1177/0018720810376055
Extract
Parasuraman & Manzey (2010) establish that automation bias and automation complacency are "different manifestations of overlapping automation-induced phenomena, with attention playing a central role." Key distinctions:
- Automation bias: The tendency to "over-accept computer output as a heuristic replacement of vigilant information seeking." This is an active decision bias — users favor automated recommendations even when contradictory information is available. Manifests as commission errors (following incorrect automated advice) and omission errors (failing to notice when automation does not flag a problem); see the sketch after this list.
- Automation complacency: Insufficient monitoring of automation output, particularly when systems are deemed reliable. This is a passive attention failure — users reduce vigilance because they trust the system.
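The commission/omission distinction amounts to a small truth table over three facts: whether a problem actually exists, whether the automation flags it, and whether the operator acts. A minimal sketch of that mapping (the `Trial` structure and field names are hypothetical, for illustration only; they are not from the paper):

```python
# Toy encoding of the commission/omission taxonomy described above.
# The Trial structure and its fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Trial:
    fault_present: bool       # ground truth: a problem actually exists
    automation_flagged: bool  # the automation alerted / advised action
    operator_acted: bool      # the human acted as if a problem existed

def classify(trial: Trial) -> str:
    """Label a trial with the automation-induced error type, if any."""
    if trial.automation_flagged and not trial.fault_present and trial.operator_acted:
        return "commission error"  # followed incorrect automated advice
    if trial.fault_present and not trial.automation_flagged and not trial.operator_acted:
        return "omission error"    # missed a problem the automation never flagged
    return "no automation-induced error"

# Automation flags a non-existent fault and the operator complies:
print(classify(Trial(fault_present=False, automation_flagged=True, operator_acted=True)))
# -> commission error

# A real fault goes unflagged and unnoticed:
print(classify(Trial(fault_present=True, automation_flagged=False, operator_acted=False)))
# -> omission error
```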
Cross-domain usage documented:
- Aviation: 6 studies — uses both terms, with complacency as the established historical term
- Healthcare/CDSS: 15 studies — primarily uses "automation bias" for commission/omission errors
- Military: 6 studies — uses both terms
- Generic HCI: 27 studies — uses both terms
The systematic review found a risk ratio of 1.26 (95% CI 1.11–1.44) for erroneous advice being followed in CDSS groups; that is, automation bias was associated with a 26% relative increase in the rate of following incorrect advice in healthcare settings.
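As a quick check on the "26%" reading, assuming the risk ratio compares a CDSS-assisted group against an unaided control group (the 10% baseline below is an illustrative number, not from the review):

$$
\mathrm{RR} = \frac{p_{\text{CDSS}}}{p_{\text{control}}} = 1.26,
\qquad
p_{\text{control}} = 0.10 \;\Rightarrow\; p_{\text{CDSS}} = 1.26 \times 0.10 = 0.126 .
$$

So a 10% baseline rate of following erroneous advice would rise to about 12.6% with the decision support in the loop.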
Relevance to Hypotheses
| Hypothesis | Relationship | Evidence |
|---|---|---|
| H1 | Supports | Confirms a rich cross-domain vocabulary exists: aviation, healthcare, military, and HCI all have documented terminology for automation overreliance |
| H2 | Contradicts | Directly refutes the claim that no vocabulary exists outside AI safety |
| H3 | Supports | Demonstrates that terminology focuses on human-side phenomena (overreliance, complacency) rather than system-side behavior (sycophancy) |
Context
This paper predates the AI sycophancy literature by over a decade. It establishes the vocabulary for traditional automation (autopilots, CDSS, military systems) where the system was deterministic. The key insight for the vocabulary mapping: these terms describe what happens to the human (bias, complacency), not what the system does (sycophancy, people-pleasing). This framing difference is the central finding of Q001.
Notes
The distinction between commission errors (actively following bad advice) and omission errors (failing to notice absent alerts) offers finer granularity than the AI safety term "sycophancy" captures. Healthcare's usage is particularly rich because of its measurable patient-safety implications.