# R0043/2026-03-28/Q001 — ACH Matrix

## Matrix
| Evidence | H1: Rich cross-domain vocabulary | H2: No cross-domain vocabulary | H3: Partial with systematic gaps |
|---|---|---|---|
| SRC01-E01: Automation bias/complacency integrated model across 4 domains | ++ | -- | + |
| SRC02-E01: AI safety uses "sycophancy" consistently; no internal fragmentation | + | N/A | + |
| SRC03-E01: Regressive/progressive taxonomy only in AI safety | + | -- | ++ |
| SRC04-E01: "Automation bias" used across Tesla, aviation, military | ++ | -- | + |
| SRC05-E01: EU AI Act codifies "automation bias" in legislation | + | -- | ++ |
| SRC06-E01: NIST lists 5 human-side risk terms, no system-side | + | -- | ++ |
| SRC07-E01: DoD "calibrated trust" framework with institutional investment | ++ | -- | + |
| SRC08-E01: Aviation says own terms insufficient for AI | - | -- | ++ |
| SRC09-E01: Healthcare "acquiescence problem" closest to system-side | + | -- | ++ |
| SRC10-E01: All existing vocabulary fundamentally insufficient | - | - | + |
Legend:
- ++ Strongly supports
- + Supports
- - Contradicts
- -- Strongly contradicts
- N/A Not applicable to this hypothesis
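Under these ratings, the matrix can be tallied mechanically. A minimal Python sketch, assuming illustrative numeric weights (the `WEIGHTS` mapping below is a hypothetical scoring convention, not part of the source analysis; formal ACH counts inconsistencies per hypothesis rather than summing scores):

```python
# Illustrative weights for each matrix rating (an assumption, not ACH canon).
WEIGHTS = {"++": 2, "+": 1, "-": -1, "--": -2, "N/A": 0}

# Ratings transcribed from the matrix: evidence ID -> (H1, H2, H3).
MATRIX = {
    "SRC01-E01": ("++", "--", "+"),
    "SRC02-E01": ("+", "N/A", "+"),
    "SRC03-E01": ("+", "--", "++"),
    "SRC04-E01": ("++", "--", "+"),
    "SRC05-E01": ("+", "--", "++"),
    "SRC06-E01": ("+", "--", "++"),
    "SRC07-E01": ("++", "--", "+"),
    "SRC08-E01": ("-", "--", "++"),
    "SRC09-E01": ("+", "--", "++"),
    "SRC10-E01": ("-", "-", "+"),
}

def scores(matrix):
    """Sum the weighted rating each hypothesis receives across all evidence."""
    totals = {"H1": 0, "H2": 0, "H3": 0}
    for h1, h2, h3 in matrix.values():
        totals["H1"] += WEIGHTS[h1]
        totals["H2"] += WEIGHTS[h2]
        totals["H3"] += WEIGHTS[h3]
    return totals

print(scores(MATRIX))  # {'H1': 9, 'H2': -17, 'H3': 15}
```

The ordering (H3 highest, H2 far below zero) matches the qualitative conclusion in the Outcome section, though the absolute numbers depend entirely on the assumed weights.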
## Diagnosticity Analysis

### Most Diagnostic Evidence
| Evidence ID | Why Diagnostic |
|---|---|
| SRC08-E01 | Aviation insiders calling their own vocabulary insufficient discriminates strongly between H1 (which predicts rich vocabulary) and H3 (which predicts gaps). Only H3 is consistent with domain experts acknowledging their own terminology limitations. |
| SRC05-E01 | The EU AI Act choosing "automation bias" (human-side) as its legal term discriminates between H1 (would predict system-side terms in regulation) and H3 (predicts human-side framing in regulation). |
| SRC09-E01 | Healthcare's "acquiescence problem" is diagnostic because it is the only term from a regulated industry that approaches system-side framing — confirming the gap is not total (contra H2) but is systematic (supporting H3). |
### Least Diagnostic Evidence
| Evidence ID | Why Non-Diagnostic |
|---|---|
| SRC04-E01 | CSET's cross-domain case studies use "automation bias" throughout, which supports both H1 and H3 without discriminating between them — vocabulary exists but the question is whether it is sufficient. |
| SRC02-E01 | AI safety's consistent use of "sycophancy" is expected under all three hypotheses and does not discriminate between them. |
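The diagnosticity ranking can be approximated numerically. A sketch, assuming a hypothetical gap metric (the absolute weight difference between the H1 and H3 ratings, since those are the two hypotheses left in contention once H2 is eliminated):

```python
# Illustrative weights for each matrix rating (an assumption, not ACH canon).
WEIGHTS = {"++": 2, "+": 1, "-": -1, "--": -2, "N/A": 0}

# Ratings transcribed from the matrix: evidence ID -> (H1, H2, H3).
MATRIX = {
    "SRC01-E01": ("++", "--", "+"),
    "SRC02-E01": ("+", "N/A", "+"),
    "SRC03-E01": ("+", "--", "++"),
    "SRC04-E01": ("++", "--", "+"),
    "SRC05-E01": ("+", "--", "++"),
    "SRC06-E01": ("+", "--", "++"),
    "SRC07-E01": ("++", "--", "+"),
    "SRC08-E01": ("-", "--", "++"),
    "SRC09-E01": ("+", "--", "++"),
    "SRC10-E01": ("-", "-", "+"),
}

def h1_h3_gap(ratings):
    """How far apart an item rates H1 and H3 (larger = more diagnostic)."""
    h1, _, h3 = ratings
    return abs(WEIGHTS[h1] - WEIGHTS[h3])

ranked = sorted(MATRIX, key=lambda e: h1_h3_gap(MATRIX[e]), reverse=True)
print(ranked[0])   # SRC08-E01 — largest gap, matching "Most Diagnostic"
print(ranked[-1])  # SRC02-E01 — zero gap, matching "Least Diagnostic"
```

Note this metric reproduces the extremes of the tables above (SRC08-E01 most diagnostic, SRC02-E01 least) but flattens the qualitative distinctions among the middle-ranked items, which the prose analysis captures more faithfully.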
## Outcome
Hypothesis supported: H3 — The vocabulary landscape is partial with systematic gaps. The human-side/system-side asymmetry is the central finding, supported by 8 of 10 evidence items.
Hypothesis eliminated: H2 — Conclusively eliminated. Every domain has at least some relevant terminology, directly contradicting the prediction of no cross-domain vocabulary.
Hypothesis inconclusive: H1 — Partially supported but insufficient. Rich vocabulary exists in some domains (defense, healthcare, aviation), but the structural human-side/system-side gaps are unexplained under H1.