R0043/2026-04-01/Q001/S02¶
WebSearch — Defense/military AI terminology for automation bias, complacency, and overtrust
Summary¶
| Field | Value |
|---|---|
| Source/Database | WebSearch |
| Query terms | military defense AI automation bias complacency agreeable output terminology |
| Filters | None |
| Results returned | 10 |
| Results selected | 2 |
| Results rejected | 8 |
Selected Results¶
| Result | Title | URL | Rationale |
|---|---|---|---|
| S02-R01 | AI Safety and Automation Bias - CSET Georgetown | https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Safety-and-Automation-Bias.pdf | Georgetown CSET issue brief bridging AI safety and defense terminology |
| S02-R02 | Complacency and Bias in Human Use of Automation - Parasuraman & Manzey | https://journals.sagepub.com/doi/10.1177/0018720810376055 | Foundational human factors paper defining automation complacency |
Rejected Results¶
Notes¶
The defense/military domain has well-established terminology from human factors research (automation bias, automation complacency, overtrust), but these terms predate AI-specific sycophancy concerns. The DoD has also established the Center for Calibrated Trust Measurement and Evaluation (CaTE), which uses its own terminology around "miscalibrated trust."