Q002 — Sycophancy Warnings — Query Definition
Query as Received
Do any corporate or government AI training materials specifically warn users about sycophancy — the tendency of AI to agree with the user, provide comforting answers without evidence, confirm user assumptions rather than challenge them, or prioritize helpfulness over accuracy? Search using both the AI safety term "sycophancy" and the human-factors equivalents: automation bias, overtrust, overreliance, confirmation reinforcement, acquiescence. Include any training that warns users the AI may tell them what they want to hear rather than what is true.
Query as Clarified
- Subject: Presence or absence of sycophancy warnings in corporate/government AI training materials
- Scope: Same organizations as Q001; expanded to include academic research and policy analysis to establish what is known vs. what is taught
- Evidence basis: Training materials, policy templates, and regulatory frameworks searched for both "sycophancy" and equivalent terms (automation bias, overtrust, overreliance, confirmation reinforcement, acquiescence); see the search sketch after this list
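As a rough illustration of that evidence search, the sketch below screens a folder of extracted training-material text for the direct AI-safety term versus the human-factors equivalents. This is a minimal sketch, not the project's actual tooling: the `training_materials` directory, file layout, and function names are assumptions, and a real screening pass would also need to catch indirect phrasings such as "tells you what you want to hear."

```python
import re
from pathlib import Path

# Term lists mirror the query definition above.
DIRECT_TERMS = ["sycophancy"]
EQUIVALENT_TERMS = [
    "automation bias",
    "overtrust",
    "overreliance",
    "confirmation reinforcement",
    "acquiescence",
]


def find_terms(text: str, terms: list[str]) -> list[str]:
    """Return the terms that appear in the text (case-insensitive, whole phrase)."""
    return [
        term
        for term in terms
        if re.search(r"\b" + re.escape(term) + r"\b", text, flags=re.IGNORECASE)
    ]


def screen_corpus(corpus_dir: str) -> None:
    """Report, per document, direct-term hits vs. equivalent-term hits."""
    for path in sorted(Path(corpus_dir).glob("**/*.txt")):
        text = path.read_text(errors="ignore")
        direct = find_terms(text, DIRECT_TERMS)
        equivalent = find_terms(text, EQUIVALENT_TERMS)
        print(f"{path.name}: direct={direct or 'none'}, equivalent={equivalent or 'none'}")


if __name__ == "__main__":
    screen_corpus("training_materials")  # hypothetical directory of extracted text
```

A keyword screen of this kind can only establish presence or absence of the vocabulary; judging whether a passage "specifically warns" about sycophancy (the first ambiguity below) still requires reading the surrounding text.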
Ambiguities Identified
- "Specifically warn" vs. generic "verify outputs" advice — where is the line?
- Whether indirect references (e.g., "don't trust AI blindly") count as sycophancy warnings
- Whether specialized AI safety training (not standard corporate training) counts
- How sycophancy (an AI behavior) relates to automation bias (a human behavior); here they are treated as complementary perspectives on the same interaction pattern
Sub-Questions
- Does any corporate training use the term "sycophancy" or explain the concept?
- Does any training cover automation bias or overtrust concepts?
- Does any training warn that AI may tell users what they want to hear?
- Does the research base support the need for such training?
- What explains the gap between research and training?
Hypotheses
| Hypothesis | Statement | Status |
|---|---|---|
| H1 | Training specifically warns about sycophancy or equivalents | Eliminated |
| H2 | Training does not warn because the concept is too new | Partially supported |
| H3 | Research exists but has not reached training due to lag, commercial disincentives, and regulatory absence | Supported |