
R0057/2026-04-01/C013

Claim: A search of 29 sources across corporate training providers, consulting firms (Deloitte, KPMG), government agencies (GSA, DoD, NHS, UK Government Digital Service), regulatory frameworks (EU AI Act, NIST AI RMF), law firm policy templates, and UX research organizations found none that warn about sycophancy — not by that name, not as automation bias, overtrust, confirmation reinforcement, or any related term.

BLUF: Partially confirmed. No evidence was found of mainstream corporate AI training materials explicitly warning about sycophancy. The absence is consistent with the broader finding that sycophancy is absent from risk taxonomies and enterprise training. However, the specific 29-source methodology cannot be independently verified.

Probability: Likely (55-80%) | Confidence: Medium


Summary

Entity | Description
Claim Definition | Claim text, scope, status
Assessment | Full analytical product with reasoning chain
ACH Matrix | Evidence × hypotheses diagnosticity analysis
Self-Audit | ROBIS-adapted 5-domain audit

Hypotheses

ID | Hypothesis | Status
H1 | No corporate training materials warn about sycophancy across all 29 sources | Plausible
H2 | The general absence is real but the specific 29-source count is unverifiable | Supported
H3 | Some sources do warn about sycophancy or related terms | Not supported
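The ACH matrix named above scores each hypothesis against each piece of evidence and favors the hypothesis with the least evidence against it. A minimal sketch of that scoring logic, with illustrative values not taken from the report (the evidence labels, matrix entries, and `rank` helper are all hypothetical):

```python
# Hypothetical ACH (Analysis of Competing Hypotheses) scoring sketch.
# Matrix values are illustrative: +1 = consistent with the evidence,
# 0 = neutral, -1 = inconsistent.
EVIDENCE = [
    "E1: no sycophancy warnings found in searched training materials",
    "E2: the 29-source count could not be independently verified",
]
MATRIX = {
    "H1": [+1, -1],  # fits E1, but the all-29-sources claim is undercut by E2
    "H2": [+1, +1],  # absence real, count unverifiable: consistent with both
    "H3": [-1,  0],  # some sources do warn: contradicts E1
}

def rank(matrix):
    """Rank hypotheses by count of inconsistent evidence, ascending.

    ACH selects against hypotheses: the one with the fewest
    inconsistencies ranks first, rather than the one with the
    most confirmations.
    """
    inconsistencies = {h: sum(1 for v in vals if v < 0)
                       for h, vals in matrix.items()}
    return sorted(inconsistencies, key=inconsistencies.get)

print(rank(MATRIX))  # → ['H2', 'H1', 'H3']; H2 has no evidence against it
```

This matches the table: H2 is the only hypothesis with no inconsistent evidence, consistent with its "Supported" status.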

Searches

ID | Target | Results | Selected
S01 | "Corporate AI training sycophancy warning automation bias overtrust absent" | 10 | 1

Sources

Source | Description | Reliability | Relevance
SRC01 | Search across corporate AI training landscape | Medium | High

Revisit Triggers

  • If any major corporate training provider adds sycophancy warnings to their materials