R0044/2026-03-29/Q003/SRC02/E01

Research R0044 — Expanded Vocabulary Research
Run 2026-03-29
Query Q003
Source SRC02
Evidence SRC02-E01
Type Analytical

"Bending the Automation Bias Curve" mentions sycophancy alongside automation bias in national security contexts, finding that human factors (experience, knowledge, trust) are the key drivers of over-reliance.

URL: https://academic.oup.com/isq/article/68/2/sqae020/7638566

Extract

The paper surveyed 9,000 adults across nine countries to test theories of automation bias. Key finding: "Research on automation bias suggests that humans can often be overconfident in AI, whereas research on algorithm aversion shows that, as the stakes of a decision rise, humans become more cautious about trusting algorithms."

The paper identifies "experiential and attitudinal" factors as the key drivers: task difficulty, background knowledge of and experience with AI, trust and confidence in AI, and self-confidence. It also identifies a Dunning-Kruger-like pattern: those with the least AI experience are slightly more algorithm-averse; automation bias then increases with experience before leveling off at higher knowledge levels.

JUDGMENT: The paper mentions both automation bias and sycophancy but does not systematically map the relationship between them. Sycophancy appears as a passing reference rather than a structured conceptual bridge.

Relevance to Hypotheses

Hypothesis | Relationship | Rationale
H1 | Supports | Both terms appear in the same paper, indicating awareness of both vocabularies.
H2 | Contradicts | The co-occurrence rules out complete siloing of the two vocabularies.
H3 | Supports | The connection is incidental rather than the focus of the work, which is exactly what H3 describes.

Context

This paper reflects the national security research community's awareness of both phenomena, but it frames them primarily through the human-factors lens (what drives human over-reliance) rather than the AI safety lens (what makes AI systems sycophantic).