R0044/2026-04-01/Q002/S03

WebSearch — Automation bias in military and professional AI decision-making

Summary

| Field | Value |
| --- | --- |
| Source/Database | WebSearch |
| Query terms | 'acquiescence problem AI systems "calibrated trust" "inappropriate trust" military defense standards' |
| Filters | None |
| Results returned | 10 |
| Results selected | 1 |
| Results rejected | 9 |

Selected Results

| Result | Title | URL | Rationale |
| --- | --- | --- | --- |
| S03-R01 | Bending the Automation Bias Curve (ISQ/Oxford) | https://academic.oup.com/isq/article/68/2/sqae020/7638566 | Large-scale experimental study of automation bias in national security AI |

Rejected Results

| Result | Title | URL | Rationale |
| --- | --- | --- | --- |
| S03-R02 | Various military AI trust articles (9 results) | Various | The rejected set included a Brookings trust analysis, Stanford CISAC, War on the Rocks, Oxford academic pieces, and others. All address military AI trust, but each either focuses on human-side trust calibration or does not document specific incidents of AI agreement causing harm. |

Notes

The military AI literature discusses trust calibration and automation bias extensively, but it focuses on human over-reliance on AI recommendations rather than on AI systems adjusting their output to match operator expectations. The Horowitz & Kahn study (S03-R01) was the most relevant empirical contribution.