R0043/2026-03-28/Q001/SRC04/E01¶
Cross-domain automation bias terminology and case studies
URL: https://cset.georgetown.edu/publication/ai-safety-and-automation-bias/
Extract¶
CSET defines automation bias as "the tendency for an individual to over-rely on an automated system" which "can lead to increased risk of accidents, errors, and other adverse outcomes when individuals and organizations favor the output or suggestion of the system, even in the face of contradictory information."
Key cross-domain findings:
- Tesla Autopilot incidents: terminology used includes "overreliance," "automation complacency," and "misuse of automation"
- Aviation (Boeing/Airbus): "automation bias," "complacency," "mode confusion," and "loss of situational awareness"
- Military (Army/Navy air defense): "automation bias," "over-trust," and "failure of human oversight"
The report frames automation bias as endangering "the successful use of artificial intelligence by eroding the user's ability to meaningfully control an AI system."
Mitigation is framed through three lenses (organizational, technical, and educational), with training, design changes, and process reforms recommended.
Relevance to Hypotheses¶
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Supports | Shows that multiple domains have relevant terminology, though "automation bias" serves as a shared umbrella term rather than domain-specific language |
| H2 | Contradicts | Multiple domains have clearly named this phenomenon |
| H3 | Supports | "Automation bias" is a human-side framing; no case study uses system-behavior terminology equivalent to "sycophancy" |
Context¶
This is one of the few sources that explicitly bridges AI safety (in its title) with traditional automation terminology. The fact that it uses "automation bias" throughout — never "sycophancy" — illustrates the vocabulary gap between AI safety and regulated industry discourse.