R0043/2026-04-01/Q001/S02

Research R0043 — Sycophancy Vocabulary
Run 2026-04-01
Query Q001
Search S02

WebSearch — Defense/military AI terminology for automation bias, complacency, and overtrust

Summary

Source/Database: WebSearch
Query terms: military defense AI automation bias complacency agreeable output terminology
Filters: None
Results returned: 10
Results selected: 2
Results rejected: 8

Selected Results

S02-R01 — AI Safety and Automation Bias (CSET Georgetown)
  URL: https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Safety-and-Automation-Bias.pdf
  Rationale: Georgetown CSET issue brief bridging AI safety and defense terminology

S02-R02 — Complacency and Bias in Human Use of Automation (Parasuraman & Manzey)
  URL: https://journals.sagepub.com/doi/10.1177/0018720810376055
  Rationale: Foundational human factors paper defining automation complacency

Rejected Results

S02-R03 — Automation bias (Wikipedia)
  URL: https://en.wikipedia.org/wiki/Automation_bias
  Rationale: Encyclopedia source; lower reliability

S02-R04 — Military AI: Operational dangers (Diplo)
  URL: https://www.diplomacy.edu/blog/why-military-ai-needs-urgent-regulation/
  Rationale: Focuses on regulation, not terminology

S02-R05 — Algorithmic bias in AI military decision support (ICRC)
  URL: https://blogs.icrc.org/law-and-policy/2024/09/03/the-problem-of-algorithmic-bias-in-ai-based-military-decision-support-systems/
  Rationale: Focuses on algorithmic bias, not agreement-seeking behavior

S02-R06 — AI in military decision-making (ICRC)
  URL: https://blogs.icrc.org/law-and-policy/2024/08/29/artificial-intelligence-in-military-decision-making-supporting-humans-not-replacing-them/
  Rationale: General human-oversight discussion

S02-R07 — Risks of AI in military targeting (ICRC)
  URL: https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/
  Rationale: Targeting-specific, not vocabulary-focused

S02-R08 — Code, Command, and Conflict (Belfer Center)
  URL: https://www.belfercenter.org/research-analysis/code-command-and-conflict-charting-future-military-ai
  Rationale: Strategic policy, not terminology

S02-R09 — Comparative ethics of AI methods (PMC)
  URL: https://pmc.ncbi.nlm.nih.gov/articles/PMC9510613/
  Rationale: Ethics framework, not terminology mapping

S02-R10 — Exploring automation bias and criminal responsibility (Oxford)
  URL: https://academic.oup.com/jicj/article/21/5/1077/7281035
  Rationale: Legal-liability focus, not terminology

Notes

The defense/military domain has well-established terminology from human factors research (automation bias, automation complacency, overtrust), but these terms predate AI-specific sycophancy concerns. The DoD has also established a Center for Calibrated Trust Measurement and Evaluation (CaTE), which uses its own terminology around "miscalibrated trust."