R0043/2026-04-01/Q001/H3
Statement
The terms used across domains describe fundamentally different phenomena, not the same phenomenon under different names. What AI researchers call "sycophancy" (a model property) is categorically different from what human factors researchers call "automation bias" (a human cognitive tendency), and a vocabulary map would be misleading.
Status
Current: Partially Supported
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC05-E01 | Oxford researchers formally distinguish sycophancy (model tendency) from overreliance (human behavior) from automation bias (human cognition), noting these are related but separate phenomena |
| SRC06-E01 | "Acquiescence bias" in LLMs produced results opposite to those of human acquiescence bias, suggesting the phenomena are not identical |
Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC09-E01 | Despite different terminology, the practical outcome is the same: agreeable-but-wrong AI output leads to harm |
| SRC04-E01 | Cross-domain mapping shows convergence on the same harmful outcome regardless of which end of the human-AI interaction is named |
Reasoning
This hypothesis is partially valid and represents an important caveat to the vocabulary map. The terms genuinely do describe different causal framings: sycophancy is about what the model does; automation bias is about how humans respond; overreliance is the behavioral outcome. A vocabulary map must acknowledge these distinctions rather than treating all terms as synonyms. However, the practical convergence — agreeable-but-wrong output causing harm — means the terms are related enough that cross-domain awareness is valuable.
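The distinction above can be made concrete as a data structure: rather than a flat synonym table (`term -> term`), each entry records which domain coined the term and where it locates the cause. A minimal sketch, with illustrative field names and groupings that are assumptions of this sketch, not drawn from the cited sources:

```python
# Sketch of a vocabulary map that preserves causal framing instead of
# flattening terms into synonyms. All field names are illustrative.
VOCABULARY_MAP = {
    "sycophancy": {
        "domain": "AI research",
        "causal_locus": "model",
        "framing": "model tendency to produce agreeable output",
    },
    "automation bias": {
        "domain": "human factors",
        "causal_locus": "human cognition",
        "framing": "human tendency to over-trust automated output",
    },
    "overreliance": {
        "domain": "human-AI interaction",
        "causal_locus": "human behavior",
        "framing": "behavioral outcome of accepting wrong output",
    },
}

def related_but_distinct(term: str) -> list[str]:
    """Terms converging on the same harm while naming a different cause."""
    locus = VOCABULARY_MAP[term]["causal_locus"]
    return [t for t, entry in VOCABULARY_MAP.items()
            if t != term and entry["causal_locus"] != locus]
```

Querying `related_but_distinct("sycophancy")` surfaces the human-side terms without implying they are interchangeable, which is exactly the property a simple synonym table would lose.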
Relationship to Other Hypotheses
H3 provides a crucial qualifier to H1. The vocabulary map (H1) is constructable, but it must be a map of related concepts with different causal framings rather than a simple synonym table. This nuance is itself a finding.