R0044/2026-03-29/Q004/H2
Statement
CaTE's work focuses exclusively on measuring and calibrating human trust in AI systems, without addressing how AI systems should be designed or how they should adjust their own behavior.
Status
Current: Partially supported
CaTE's primary emphasis is clearly on measuring and calibrating human trust. However, the work also treats system design properties (transparency, explainability, reliability) as inputs to trust calibration. H2 is therefore too narrow: CaTE does address system design properties, though not how systems should adjust their own behavior.
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC02-E01 | Sablon: "The human has to understand the capabilities and limitations of the AI system to use it responsibly" — human-focused framing |
Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC01-E01 | CaTE Guidebook addresses system design properties for trustworthiness, not just human trust measurement |
Reasoning
H2 is partially supported: the human-side focus is confirmed, but CaTE's scope also extends to system design properties (the scope H3 describes), which makes H2 too narrow.
Relationship to Other Hypotheses
H2 and H3 overlap; H3 is the more accurate characterization of CaTE's scope.