R0043/2026-03-28/Q001/H3
Statement
Some domains have well-developed terminology while others use generic or borrowed terms, and the vocabularies address different facets of the same underlying problem. Specifically, most domains frame the problem from the human operator's perspective (overreliance, complacency) rather than from the AI system's perspective (sycophancy, people-pleasing).
Status
Current: Supported
This is the hypothesis best supported by the evidence. The vocabulary mapping reveals a systematic asymmetry: established safety-critical domains (aviation, defense, healthcare) have mature terminology for the human side of the problem, while AI safety research alone has terminology for the system-behavior side. Financial services, academic integrity, enterprise evaluation, and UX/product design borrow terms from adjacent fields rather than developing their own.
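As a compact illustration of that asymmetry, the sketch below encodes the mapping in Python. This is a minimal, hypothetical encoding: the term lists are condensed from the evidence cited in this note, while the dictionary structure, identifiers, and human/system side labels are illustrative glosses rather than anything drawn from the sources.

```python
# Minimal sketch of the vocabulary mapping behind H3 (illustrative only).
# Term lists are condensed from the evidence cited below; the "human"/"system"
# side labels are this note's interpretive gloss, not source terminology.

VOCABULARY = {
    # Safety-critical domains and regulators: terms on the human side only.
    "aviation":    {"human": ["automation bias", "complacency"], "system": []},
    "healthcare":  {"human": ["automation bias", "complacency"], "system": []},
    "eu_ai_act":   {"human": ["automation bias"], "system": []},   # SRC05-E01
    "nist_ai_rmf": {"human": ["overreliance"], "system": []},      # SRC06-E01
    # AI safety: the only domain with system-side terms (SRC02/SRC03).
    "ai_safety":   {"human": [], "system": ["sycophancy",
                                            "regressive sycophancy",
                                            "progressive sycophancy"]},
    # Defense's "calibrated trust" (SRC07-E01) is omitted here: it partially
    # bridges both sides and resists a one-sided classification.
}

def one_sided_domains(vocab: dict) -> dict:
    """Return {domain: side} for domains whose terms all sit on one side."""
    result = {}
    for domain, sides in vocab.items():
        populated = [side for side, terms in sides.items() if terms]
        if len(populated) == 1:
            result[domain] = populated[0]
    return result

# Every listed domain is one-sided, which is the asymmetry H3 asserts:
# {'aviation': 'human', 'healthcare': 'human', 'eu_ai_act': 'human',
#  'nist_ai_rmf': 'human', 'ai_safety': 'system'}
print(one_sided_domains(VOCABULARY))
```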
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC01-E01 | Parasuraman & Manzey's integrated model shows automation bias and complacency are human-side phenomena, distinct from system-side behavior |
| SRC02-E01 | AI safety research uses "sycophancy" for the system behavior; no regulated industry has an equivalent system-side term |
| SRC03-E01 | TechPolicy.Press identifies "regressive sycophancy" and "progressive sycophancy," a taxonomic refinement happening only in AI safety, not in regulated industries |
| SRC05-E01 | EU AI Act uses "automation bias" — a human-side term — as its regulatory concept, with no corresponding system-behavior requirement |
| SRC06-E01 | NIST AI RMF lists "overreliance" as a human-AI configuration risk but does not name system-side sycophancy |
| SRC10-E01 | Research argues existing vocabulary is fundamentally insufficient because human concepts do not map cleanly to machine behaviors |
Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC07-E01 | DoD's "calibrated trust" framework attempts to bridge human-side and system-side perspectives, partially contradicting the claim of a clean asymmetry |
Reasoning
The evidence strongly supports H3. The vocabulary landscape is not empty (eliminating H2) nor uniformly rich (qualifying H1), but systematically asymmetric along a human/system axis. Safety-critical domains developed their terminology in the era of traditional automation (autopilots, CDSS, weapons systems) where the system was deterministic and the failure mode was human — hence human-side framing. AI safety developed its terminology for a new class of system that actively adapts its output to please the user — hence system-side framing. The vocabulary gap is not random; it reflects the historical context in which each domain's terminology emerged.
Financial services, academic integrity, and enterprise evaluation have the least developed domain-specific vocabulary, instead borrowing "bias," "hallucination," and "accuracy" from adjacent fields without developing terms specific to the agreement-over-accuracy phenomenon.
Relationship to Other Hypotheses
H3 subsumes the valid portions of H1 (some domains do have rich vocabulary) while explaining why H2 is wrong (vocabulary exists, but with structural gaps). H3 adds the critical insight that the asymmetry is not random but reflects a human-side vs. system-side framing difference.