# R0044/2026-03-29/Q003 — Assessment

## BLUF
Bridging between the human-factors vocabulary (automation bias, overtrust) and the AI safety vocabulary (sycophancy) is emerging but not yet systematic. Georgetown CSET's "AI Safety and Automation Bias" paper (November 2024) is the strongest candidate for bridging work, combining both terms in its title. The "Bending the Automation Bias Curve" paper mentions both in a national security context. However, no source was found that provides a systematic vocabulary mapping or formally argues that automation bias and sycophancy describe the same underlying phenomenon from different disciplinary perspectives. The human factors community's most comprehensive recent systematic review (2025, 35 studies) does not mention sycophancy. The AI safety community's most sophisticated sycophancy analysis identifies sycophancy as "system-generated" (vs. "user-driven") bias but connects it to confirmation bias, not automation bias.
## Probability
Rating: Unlikely (20-45%) that explicit, systematic bridging exists; Likely (55-80%) that partial/emerging bridging exists
Confidence in assessment: Medium
Confidence rationale: The CSET paper — the most likely candidate for bridging — could not be fully analyzed. This is the most significant gap. The other sources provide consistent evidence that the vocabularies are largely siloed with only incidental cross-references.
## Reasoning Chain
- CSET's "AI Safety and Automation Bias" (Nov 2024) combines both terms in its title, suggesting at least one research group recognizes the connection [SRC01-E01, Medium-High reliability, High relevance]
- "Bending the Automation Bias Curve" (ISQ, 2024) mentions sycophancy alongside automation bias in national security contexts, but the connection is incidental [SRC02-E01, High reliability, Medium-High relevance]
- The most comprehensive systematic review of automation bias (35 studies, 2015-2025) does not mention sycophancy [SRC03-E01, High reliability, Medium relevance]
- The most sophisticated theoretical analysis of sycophancy distinguishes "system-generated" from "user-driven" bias but connects sycophancy to confirmation bias, not automation bias [SRC04-E01, Medium reliability, Medium-High relevance]
- JUDGMENT: The conceptual ingredients for bridging exist. The rational analysis paper identifies system-generated vs. user-driven as the key distinction, which maps precisely onto sycophancy (a system property) vs. automation bias (a human property). But no one has made this connection explicit in a systematic way.
## Evidence Base Summary
| Source | Description | Reliability | Relevance | Key Finding |
|---|---|---|---|---|
| SRC01 | CSET AI Safety + Automation Bias | Medium-High | High | Title bridges vocabularies; full text not accessible |
| SRC02 | Bending the Automation Bias Curve | High | Medium-High | Incidental co-reference of both terms |
| SRC03 | Springer systematic review | High | Medium | Comprehensive automation bias review omits sycophancy |
| SRC04 | Rational Analysis of Sycophantic AI | Medium | Medium-High | Identifies system/user distinction but links it to confirmation bias |
## Collection Synthesis
| Dimension | Assessment |
|---|---|
| Evidence quality | Medium — key source (CSET) inaccessible; other sources consistently show vocabulary gap |
| Source agreement | High — all sources agree the vocabularies are largely separate |
| Source independence | High — the human factors, AI safety, and national security literatures were each searched independently |
| Outliers | CSET paper is the outlier — potentially bridges what other sources keep separate |
## Detail
The vocabulary gap is real and consequential. Human factors researchers have decades of work on automation bias but frame it as a human cognitive vulnerability. AI safety researchers have recent work on sycophancy but frame it as a system training problem. Because of this gap, solutions designed in one community may never reach the other. The CSET paper at Georgetown — sitting at the intersection of AI safety and national security — may be the first to bridge the gap systematically, but verifying this requires access to the full text.
## Gaps
| Missing Evidence | Impact on Assessment |
|---|---|
| Full text of CSET "AI Safety and Automation Bias" paper | This is the single most likely source of systematic vocabulary bridging. Its inaccessibility is the largest gap in this query. |
| Aviation human factors literature on AI-specific sycophancy | Aviation has the deepest automation bias tradition but may not yet engage with AI safety vocabulary |
| Medical informatics literature bridging CDS automation bias to LLM sycophancy | As LLMs enter clinical decision support, this bridge becomes critical |
## Researcher Bias Check
Declared biases: No researcher profile provided.
Influence assessment: The query carries an implicit hypothesis that bridging should exist and that its absence is a problem. This is a reasonable analytical position, but it could bias the assessment toward overstating whatever bridging evidence is found.
## Cross-References
| Entity | ID | File |
|---|---|---|
| Hypotheses | H1, H2, H3 | hypotheses/ |
| Sources | SRC01-SRC04 | sources/ |
| ACH Matrix | — | ach-matrix.md |
| Self-Audit | — | self-audit.md |