R0044/2026-04-01/Q002/SRC05/E01
Automation bias rates in military AI decision-making scenarios
URL: https://academic.oup.com/isq/article/68/2/sqae020/7638566
Extract
Key findings from the 9,000-respondent study:
- Switching rates: Participants with moderate AI exposure showed peak automation bias, switching their answers to match AI recommendations 25-29% of the time. Those with minimal exposure showed algorithm aversion; those with extensive experience showed more balanced reliance.
- Dunning-Kruger effect: the relationship between AI experience and reliance is curvilinear — those with the lowest level of AI experience are slightly more likely to be algorithm-averse, automation bias peaks at low-to-moderate levels of knowledge, and reliance levels off with greater expertise.
- Self-confidence effect: Respondents with perfect practice scores switched answers only 18% of the time, versus more than 25% for those with zero correct responses; less competent users were more susceptible to automation bias.
- Trust amplification: High system-confidence treatments (describing thorough testing) consistently increased answer switching across demographics.
Important distinction: This study measures automation bias (human over-reliance on AI recommendations) rather than AI sycophancy (AI adjusting its output to match user expectations). The AI in this study gave consistent recommendations regardless of user input. The harm mechanisms differ: automation bias is the human failing to question AI output, whether correct or incorrect, while sycophancy is the AI actively producing agreeable output.
Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Partially supports | Quantifies automation bias in a national security context, but measures a related but distinct mechanism |
| H2 | Supports | Lab-based evidence in the military domain; related mechanism, but not sycophancy-specific |
| H3 | Contradicts | Demonstrates measurable behavioral change from AI recommendations in a military context |
Context
This study represents the military domain's contribution to the evidence base. It demonstrates that automation bias is measurable in national security contexts and that certain user profiles (moderate AI exposure, low self-confidence) are particularly vulnerable. However, it does not address the specific scenario where the AI adjusts its output to match user expectations.