R0044/2026-04-01/Q004/H2
Statement
CaTE addresses both system trustworthiness (is the AI reliable?) and operator trust (does the human trust it appropriately?). Its emphasis, however, is on human-side trust measurement; the system-side work focuses on evaluation and testing rather than on constraining AI output behavior.
Status
Current: Supported
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC01-E01 | Guidebook addresses TEVV — system testing and evaluation |
| SRC02-E01 | States scope covers both "evaluating operator trust" and "assuring trustworthiness" |
| SRC03-E01 | Emphasizes "human-system balance of roles and responsibilities" |
Contradicting Evidence
| Evidence | Summary |
|---|---|
| None | No evidence contradicts this characterization |
Reasoning
All three sources converge on the same picture: CaTE works on both sides but with a clear emphasis on measuring operator trust and evaluating system trustworthiness through testing. The "calibrated trust" concept is fundamentally about ensuring the human's trust level matches the system's actual capabilities — which is a human-side calibration goal, not a system-side behavioral constraint.
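The calibration idea above can be made concrete with a small sketch. This is purely illustrative and not CaTE's actual method; the function name and the signed-gap metric are assumptions chosen for clarity. It treats calibrated trust as the gap between an operator's stated trust and the system's observed reliability on the same tasks.

```python
# Illustrative sketch only (hypothetical, not drawn from the CaTE guidebook):
# quantify trust calibration as the signed gap between mean operator trust
# and the system's observed reliability.

def calibration_gap(trust_ratings, system_correct):
    """Return mean trust minus observed reliability.

    trust_ratings: per-task operator trust values in [0, 1]
    system_correct: per-task booleans, True if the system was right
    Positive = over-trust, negative = under-trust, zero = calibrated.
    """
    if len(trust_ratings) != len(system_correct):
        raise ValueError("inputs must be the same length")
    reliability = sum(system_correct) / len(system_correct)
    mean_trust = sum(trust_ratings) / len(trust_ratings)
    return mean_trust - reliability

# Operator trusts the system ~90% of the time, but it is right only 60%:
gap = calibration_gap([0.9, 0.9, 0.9, 0.9, 0.9],
                      [True, True, True, False, False])
# gap > 0 here, indicating over-trust
```

A human-side intervention (training, UI cues) would aim to drive this gap toward zero, which is exactly a calibration goal rather than a constraint on the system's outputs.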
Relationship to Other Hypotheses
This is the best-supported hypothesis. H1 overstates system-side coverage; H3 understates it.