R0048/2026-04-01/Q002/H2
Statement
Sycophancy is not addressed directly in training, but adjacent concepts (automation bias, overtrust, overreliance, the need for critical evaluation) are partially covered, providing indirect protection against sycophancy-related risks.
Status
Current: Supported
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC06-E01 | NHS framework explicitly names automation bias and cognitive biases as training topics |
| SRC05-E01 | Microsoft provides training scenarios covering "where AI excels and where it might falter," which is adjacent to sycophancy awareness |
| SRC03-E01 | Brookings recommends incorporating AI literacy into DOL workforce development to recognize uncertainty and bias reinforcement |
Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC01-E01 | Georgetown paper treats sycophancy as an unaddressed risk requiring new policy interventions, suggesting adjacent concepts are insufficient |
| SRC04-E01 | Science study finds AI models are 49% more sycophantic than humans; the scale of the problem suggests that generic "verify outputs" training is insufficient |
Reasoning
The NHS framework's explicit mention of automation bias and the general "verify outputs" guidance in DOL and corporate training provide some adjacent protection. However, none of these materials connects those concepts to the specific pattern of AI systems prioritizing user agreement over accuracy. The distinction matters: automation bias training teaches "don't blindly trust AI," while sycophancy awareness would teach "AI is actively designed to agree with you." The former is passive caution; the latter requires understanding a specific behavioral mechanism.
Relationship to Other Hypotheses
H2 represents the middle ground: it acknowledges that training addresses some related concepts while recognizing the significant gap between adjacent-concept coverage and direct sycophancy awareness. It is the best-supported hypothesis, but it also reveals a critical insufficiency in current training.