R0044/2026-03-29/Q004

Query: The DoD CaTE (Calibrated AI Trust and Expectations) center was identified as having the most sophisticated regulated-industry vocabulary for this problem. What has CaTE published about calibrating trust in AI systems, and does their work address the system-side behavior (AI adjusting output to match user expectations) or only the human-side behavior (users trusting AI too much)?

BLUF: CaTE has published a Guidebook and companion guides focused on trust measurement and trustworthiness evaluation for military AI systems, including lethal autonomous weapon systems (LAWS). CaTE's work addresses system design properties (trustworthiness dimensions) and human trust calibration (measurement methods), but does NOT address system-side output behavior. The concept of AI systems adjusting their output to match or counteract user expectations (sycophancy) is absent from CaTE's vocabulary. CaTE operates on a "measure and inform" paradigm, not a "constrain and prevent" paradigm.

Answer: H3 (System properties but not output behavior) · Confidence: Medium


Summary

| Entity | Description |
|---|---|
| Query Definition | Question as received, clarified, ambiguities, sub-questions |
| Assessment | Full analytical product |
| ACH Matrix | Evidence × hypotheses diagnosticity analysis |
| Self-Audit | ROBIS-adapted 4-domain process audit |

Hypotheses

| ID | Statement | Status |
|---|---|---|
| H1 | CaTE addresses both system-side and human-side behavior | Eliminated |
| H2 | CaTE addresses only human-side behavior | Partially supported |
| H3 | CaTE addresses system properties but not output behavior | Supported |

CaTE Published Outputs

| Publication | Type | Focus |
|---|---|---|
| CaTE Guidebook | Guidebook | Trust, trustworthiness, calibrated trust, ethics for LAWS |
| Human Machine Teaming Design Framework | Companion guide | Human-centric design for non-deterministic systems |
| HSI T&E of AI Capabilities | Companion guide | Human Systems Integration test and evaluation strategy |

Searches

| ID | Target | Type | Outcome |
|---|---|---|---|
| S01 | CaTE publications and research | WebSearch | Found Guidebook and organizational descriptions |
| S02 | CaTE Guidebook contents | WebSearch | PDF not extractable; metadata captured |
| S03 | Related trust calibration frameworks | WebSearch | Found Sandia TCMM confirming paradigm |

Sources

| Source | Description | Reliability | Relevance | Evidence |
|---|---|---|---|---|
| SRC01 | CaTE Guidebook | High | High | 1 extract |
| SRC02 | CMU/SEI CaTE overview | Medium-High | High | 1 extract |
| SRC03 | Sandia TCMM | Medium-High | Medium | 1 extract |

Revisit Triggers

  • Publication of new CaTE research papers or guidebook editions
  • CaTE expansion to address system output behavior or sycophancy prevention
  • DoD policy changes requiring AI systems to actively counteract operator overtrust
  • CaTE collaboration with AI safety researchers on trust-adaptive AI systems