R0043/2026-04-01/Q001/SRC03/E01¶
Defense/military domain terminology for automation-related trust and bias failures
URL: https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Safety-and-Automation-Bias.pdf
Extract¶
Key defense/military terms identified:
- Automation bias — the tendency to over-rely on and over-trust information provided by a machine; defined as "unquestioning trust in outputs produced by AI decision support systems"
- Automation complacency — reduced vigilance in monitoring, on the assumption that the system is operating correctly; defined as "poorer detection of system malfunctions under automation compared with under manual control"
- Overtrust — inaccurate calibration between machine capabilities and human expectations
- Misuse — over-reliance on automation (one of four categories of automation use: use, misuse, disuse, abuse)
The brief notes that automation bias effects have contributed to fatal military decisions, including friendly-fire killings during the Iraq War (2004 study). Human operators experience cognitive distortion in high-pressure situations, and AI systems deliver outputs in a "confident tone," leading operators to attribute undue credibility to the recommendations.
JUDGMENT: The defense/military domain focuses on the human side of the interaction — how humans respond to AI outputs — rather than on the model side (what AI safety calls sycophancy). The two vocabularies are complementary, not synonymous.
Relevance to Hypotheses¶
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Supports | Confirms defense domain has well-established, distinct terminology |
| H2 | Contradicts | Defense has rich vocabulary, not a gap |
| H3 | Supports | Terms describe human cognition (automation bias) not model behavior (sycophancy) — different framings |
Context¶
The defense terminology predates LLM-era sycophancy discourse by roughly two decades. It emerged from aviation and industrial automation research and was later adopted by the military domain. The DOD Center for Calibrated Trust (CaTE) represents the institutional formalization of this vocabulary.