R0044/2026-04-01/Q004/SRC02/E01

Research R0044 — Expanded Vocabulary Research
Run 2026-04-01
Query Q004
Source SRC02
Evidence SRC02-E01
Type Factual

CaTE's stated dual scope from SEI annual review

URL: https://www.sei.cmu.edu/annual-reviews/2023-year-in-review/assuring-trustworthiness-of-ai-for-warfighters/

Extract

CaTE's stated scope:
- Mission: "Establish methods for assuring trustworthiness in AI systems with emphasis on interaction between humans and autonomous systems"
- Dual focus: "Standards, methods, and processes for providing evidence for assurance" (system side) AND "developing measures to determine calibrated levels of trust" (human side)
- Key concept: "How systems interact with each other, and especially the interactions between AI and humans"
- Vocabulary: Uses "calibrated trust," "safe and reliable operation," "human understanding of AI capabilities and limitations"
- Does NOT use: sycophancy, acquiescence, agreement bias, automation bias

The review describes CaTE as addressing "human-machine teaming and measurable trust" across military branches. It states that the "human has to understand the capabilities and limitations of the AI system to use it responsibly" — framing the problem as one of human understanding, not system behavior.

Relevance to Hypotheses

Hypothesis | Relationship | Strength
H1 | Contradicts | No evidence of system output behavioral constraints
H2 | Supports | Dual scope stated, but human trust measurement is emphasized
H3 | Contradicts | System trustworthiness evaluation is within scope

Context

The SEI annual review provides the most detailed public description of CaTE's scope and approach. Its vocabulary is drawn entirely from the defense/human-factors tradition (calibrated trust, human-machine teaming, trustworthiness assurance), with none of the AI safety field's terminology (sycophancy, alignment, RLHF).