R0043/2026-03-28/Q001/H1
Statement
Each domain has developed its own specific terminology for the sycophancy phenomenon, creating a rich but fragmented vocabulary across all eight specified fields.
Status
Current: Partially supported
The evidence shows that several domains, particularly aviation/human factors, healthcare, and defense, have well-developed terminology. However, this terminology largely addresses the human side of the problem (overreliance, complacency, automation bias) rather than the system-behavior side (sycophancy, people-pleasing). The vocabulary is rich, but it reflects a different causal framing than AI safety's "sycophancy."
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC01-E01 | Parasuraman & Manzey define automation bias and complacency as distinct but overlapping constructs used across aviation, healthcare, and military domains |
| SRC04-E01 | CSET report identifies automation bias as a cross-domain term used in Tesla, aviation, and military contexts |
| SRC07-E01 | DoD uses "calibrated trust," "overtrust," and "appropriate trust" as distinct domain terminology |
| SRC08-E01 | FAA/aviation uses "automation complacency" as established terminology |
| SRC09-E01 | Healthcare uses "acquiescence problem" and "automation bias" for clinical decision support |
Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC02-E01 | Within AI safety research, the term "sycophancy" is used consistently, suggesting convergence rather than fragmentation within that domain |
| SRC10-E01 | The vocabulary gap paper argues existing terminology is fundamentally insufficient, not merely fragmented |
Reasoning
H1 is partially supported because substantial domain-specific vocabulary does exist: aviation has "automation complacency," defense has "calibrated trust" and "overtrust," healthcare has "acquiescence problem" and "automation bias," and enterprise evaluation references "agreeableness bias." However, this vocabulary is not as richly developed on the system-behavior side as H1 predicts. Most domains frame the problem from the human operator's perspective (why humans over-trust automation) rather than from the system's perspective (why an AI agrees when it should not). The AI safety term "sycophancy" has no direct equivalent in most regulated industries, and the evidence documents terminology in only a subset of the eight specified fields rather than in all of them.
Relationship to Other Hypotheses
H1 overlaps significantly with H3: the evidence shows that vocabulary exists, but with systematic asymmetries across domains. The key distinction is that H1 predicts each domain has developed its own terms, while H3 predicts that some domains have terms while others borrow or lack them. The evidence more strongly supports H3.