R0048/2026-04-01/Q002

Query: Do any corporate or government AI training materials specifically warn users about sycophancy — the tendency of AI to agree with the user, provide comforting answers without evidence, confirm user assumptions rather than challenge them, or prioritize helpfulness over accuracy?

BLUF: No publicly visible corporate or government AI training program warns employees about sycophancy. The term does not appear in any of the training materials found. The NHS framework addresses automation bias (the human side) but not sycophancy (the AI side). There is a complete disconnect between AI safety research on sycophancy and what employees are taught.

Probability: N/A (open-ended query) | Confidence: Medium-High


Summary

Entity            Description
Query Definition  Query text, scope, status
Assessment        Full analytical product with reasoning chain
ACH Matrix        Evidence × hypotheses diagnosticity analysis
Self-Audit        ROBIS-adapted 5-domain audit (process + source verification)

Hypotheses

ID  Hypothesis                                             Status
H1  Training warns about sycophancy                        Eliminated
H2  Adjacent concepts (automation bias) partially covered  Supported
H3  No related concepts addressed at all                   Partially supported
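The hypothesis ranking above follows the ACH method: score each evidence item as consistent, inconsistent, or neutral against each hypothesis, then rank hypotheses by how little evidence contradicts them. The sketch below illustrates that mechanic with hypothetical scores (the evidence items and C/I/N values are invented for illustration, not taken from the report's actual matrix):

```python
# Minimal ACH (Analysis of Competing Hypotheses) sketch.
# "C" = consistent, "I" = inconsistent, "N" = neutral / not diagnostic.
HYPOTHESES = ["H1", "H2", "H3"]

# Hypothetical evidence-by-hypothesis scores, loosely mirroring the
# report's three hypotheses; the specific items are illustrative only.
EVIDENCE = {
    "E1: no training material mentions sycophancy": {"H1": "I", "H2": "C", "H3": "C"},
    "E2: NHS framework covers automation bias":     {"H1": "I", "H2": "C", "H3": "I"},
    "E3: policy papers flag the training gap":      {"H1": "I", "H2": "N", "H3": "N"},
}

def inconsistency_counts(evidence):
    """ACH ranks hypotheses by counting inconsistent evidence:
    the hypothesis with the fewest 'I' scores is the most viable."""
    counts = {h: 0 for h in HYPOTHESES}
    for scores in evidence.values():
        for h, score in scores.items():
            if score == "I":
                counts[h] += 1
    return counts

counts = inconsistency_counts(EVIDENCE)
ranked = sorted(HYPOTHESES, key=counts.get)  # most viable first
print(counts)  # → {'H1': 3, 'H2': 0, 'H3': 1}
print(ranked)  # → ['H2', 'H3', 'H1']
```

Note that ACH eliminates rather than confirms: H1 is rejected because every evidence item contradicts it, not because any single item "proves" H2.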

Searches

ID   Target                                   Results  Selected
S01  Sycophancy in training materials         10       4
S02  Automation bias and overtrust            10       2
S03  Policy analysis (Brookings, Georgetown)  2        1

Sources

Source  Description                            Reliability  Relevance
SRC01   Georgetown sycophancy policy analysis  High         High
SRC02   IPR workplace sycophancy risk          Medium-High  High
SRC03   Brookings AI mirror analysis           High         High
SRC04   Science journal sycophancy study       High         High
SRC05   Lumenova automation bias analysis      Medium       High
SRC06   NHS automation bias framework          High         Medium-High

Revisit Triggers

  • Any major training provider adds sycophancy to its curriculum
  • NIST or ISO publishes AI sycophancy testing or disclosure standards
  • Georgetown or Brookings policy recommendations are adopted by federal regulators
  • EU AI Act enforcement actions cite sycophancy as a training gap
  • DOL updates AI Literacy Framework to include sycophancy
  • A major AI company (OpenAI, Anthropic, Google) publishes user-facing sycophancy warnings