R0044/2026-03-29/Q003/H1¶
Statement¶
Published research or guidance explicitly connects the human-factors concept of automation bias/overtrust to the AI safety concept of sycophancy, recognizing them as related phenomena requiring integrated solutions.
Status¶
Current: Partially supported
The Georgetown CSET paper "AI Safety and Automation Bias" (November 2024) is the strongest candidate for bridging work, since its title explicitly combines both vocabularies. However, the full text was not accessible for detailed analysis. "Bending the Automation Bias Curve" mentions sycophancy in the context of automation bias in national security, and the 2025 Springer systematic review of automation bias in human-AI collaboration spans cognitive psychology and human-computer interaction. No source was found, though, that systematically maps the two vocabularies or formally argues that they describe the same phenomenon.
Supporting Evidence¶
| Evidence | Summary |
|---|---|
| SRC01-E01 | CSET paper combines "AI Safety" and "Automation Bias" in its title, suggesting bridging work |
| SRC02-E01 | "Bending the Automation Bias Curve" mentions sycophancy alongside automation bias in a national security context |
Contradicting Evidence¶
| Evidence | Summary |
|---|---|
| SRC04-E01 | Treats sycophancy as a distinct AI-generated phenomenon without referencing automation bias literature |
| SRC03-E01 | Systematic review of automation bias does not mention sycophancy |
Reasoning¶
H1 is partially supported. The CSET paper appears to bridge the two vocabularies, but this inference rests on its title alone; full verification requires access to the complete text. Most sources treat automation bias and sycophancy in parallel without explicitly connecting them, so the bridging that exists is emerging and implicit rather than systematic.
Relationship to Other Hypotheses¶
The evidence pattern sits between H1 and H3: some bridging exists (ruling out H2), but it is not yet systematic (so not fully H1), which makes H3 the best characterization.