R0043/2026-03-28/Q003/H1
## Statement
Multiple researchers and organizations have identified the AI safety / regulated-industry vocabulary gap and are actively working to bridge it with shared taxonomies.
## Status
Current: Partially supported
The broader terminology gap is well recognized, and multiple organizations are building risk taxonomies and glossaries. However, these efforts are general: they do not specifically target the sycophancy/overreliance vocabulary gap.
## Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC01-E01 | Trilateral Research explicitly identifies "the AI terminology gap" as an operational problem |
| SRC02-E01 | Standardized Threat Taxonomy explicitly addresses the "Tower of Babel problem" between engineering and legal teams |
| SRC03-E01 | MIT AI Risk Repository synthesizes 65 documents into a shared risk framework |
| SRC05-E01 | AIR 2024 maps 314 risk categories across government and corporate policies |
## Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC04-E01 | Argues the problem is deeper than taxonomy: existing concepts do not translate across the human/machine divide |
## Reasoning
H1 is partially supported because significant taxonomy-building efforts exist (the MIT AI Risk Repository, AIR 2024, the Standardized Threat Taxonomy, Trilateral Research). However, none of them targets the sycophancy/overreliance vocabulary gap specifically; each addresses the broader challenge of AI risk terminology, within which sycophancy is a small component.
## Relationship to Other Hypotheses
The evidence pattern matches H3 more closely than H1: the gap is recognized broadly but not for sycophancy specifically.