# R0048/2026-03-29/Q002

## Query
Do any corporate or government AI training materials specifically warn users about sycophancy — the tendency of AI to agree with the user, provide comforting answers without evidence, confirm user assumptions rather than challenge them, or prioritize helpfulness over accuracy? Search using both the AI safety term "sycophancy" and the human-factors equivalents: automation bias, overtrust, overreliance, confirmation reinforcement, acquiescence.
## BLUF
No. No corporate or government AI training material examined warns about sycophancy by name or by any equivalent concept. This finding is almost certain (97% confidence). The absence is not due to ignorance — extensive research exists, including a 2026 Science publication and a 2022 Microsoft Research review of ~60 papers. Rather, the gap is driven by: (1) research-to-practice lag, (2) commercial disincentives (sycophantic AI drives engagement metrics that firms optimize for), and (3) absence of regulatory mandates (neither NIST AI RMF nor the EU AI Act names sycophancy as a risk).
## Answer + Confidence
Sycophancy is absent, by name and by equivalent concept, from every standard corporate and government AI training material examined.
Confidence: Almost certain (97%). A comprehensive search across 10 source types found consistent absence, and multiple independent sources themselves identify the same gap.
## Summary
| Document | Link |
|---|---|
| Query definition | query.md |
| Full assessment | assessment.md |
| ACH matrix | ach-matrix.md |
| Self-audit | self-audit.md |
## Hypotheses
| ID | Statement | Status |
|---|---|---|
| H1 | Training warns about sycophancy or equivalents | Eliminated |
| H2 | Absent because concept is too new/niche | Partially supported |
| H3 | Research-to-practice gap: the risk is known but not taught, owing to publication lag, commercial disincentives, and regulatory absence | Supported |
## Searches
| Search | Type | Outcome |
|---|---|---|
| S01 | Sycophancy in corporate training | 5 selected / 5 rejected — found research *about* sycophancy but no training that *includes* it |
| S02 | Automation bias and overreliance | 4 selected / 26 rejected — human-factors literature confirmed the gap |
| S03 | Academic research | 4 selected / 36 rejected — extensive research exists, in contrast to its absence from training |
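For reproducibility, a minimal sketch of the term screen underlying S01–S03 is below, assuming the collected training materials are stored as local plain-text files. The directory name, file format, and function name are hypothetical; the term list is taken verbatim from the query.

```python
import pathlib
import re

# Terms from the query: the AI-safety name plus its human-factors equivalents.
TERMS = [
    "sycophancy",
    "automation bias",
    "overtrust",
    "overreliance",
    "confirmation reinforcement",
    "acquiescence",
]


def scan_corpus(corpus_dir: str) -> dict[str, list[str]]:
    """Return {filename: [matched terms]} for each document mentioning any term."""
    hits: dict[str, list[str]] = {}
    for path in sorted(pathlib.Path(corpus_dir).glob("**/*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        # Whole-word match so e.g. "trust" alone does not count as "overtrust".
        matched = [t for t in TERMS if re.search(rf"\b{re.escape(t)}\b", text)]
        if matched:
            hits[path.name] = matched
    return hits


if __name__ == "__main__":
    # "training-materials/" is a hypothetical directory of collected documents.
    results = scan_corpus("training-materials")
    if not results:
        print("No document mentions sycophancy or any equivalent term.")
    for name, terms in sorted(results.items()):
        print(f"{name}: {', '.join(terms)}")
```

Matching is deliberately strict: spelling variants such as "over-reliance" or inflections such as "overtrusting" would need to be added to `TERMS` explicitly.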
## Sources
| Source | Reliability | Relevance | Evidence |
|---|---|---|---|
| SRC01 — IPR Sycophancy | Medium-High | High | SRC01-E01 |
| SRC02 — Georgetown Law | High | High | SRC02-E01 |
| SRC03 — Stanford/Science | High | High | SRC03-E01 |
| SRC04 — OpenAI Rollback | Medium-High | High | SRC04-E01 |
| SRC05 — NN/g UX Research | Medium-High | High | SRC05-E01 |
| SRC06 — Rational Analysis | Medium-High | High | SRC06-E01 |
| SRC07 — Microsoft Research | High | High | SRC07-E01 |
| SRC08 — NIST AI RMF | High | High | SRC08-E01 |
| SRC09 — Lumenova | Medium | Medium-High | SRC09-E01 |
| SRC10 — Yes-Machine Problem | Medium | High | SRC10-E01 |
## Revisit Triggers
- Any corporate training provider adding sycophancy to its curriculum
- NIST AI RMF revision that includes sycophancy as a named risk
- EU AI Act implementing guidance that specifies sycophancy under AI literacy
- Post-OpenAI-rollback policy changes at major AI companies
- Evidence that the Stanford/Science study (SRC03) is influencing training standards