# Q002-H1 — Training Warns About Sycophancy

## Statement
Corporate or government AI training materials specifically warn users about sycophancy or its equivalents (automation bias, overtrust, confirmation reinforcement), explaining that AI may tell users what they want to hear rather than what is true.
## Status
Eliminated. No corporate or government training material examined includes specific warnings about sycophancy, the RLHF feedback loop, or the tendency of AI to agree with users. The term does not appear in any training curriculum, policy template, or government guidance document examined.
## Supporting Evidence
| Evidence | Summary |
|---|---|
| (none found) | No training material examined warns about sycophancy |
## Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC01-E01 | IPR calls sycophancy AI's "quietest and most dangerous flaw"; a flaw framed as quiet and hidden is, by implication, absent from training |
| SRC02-E01 | Georgetown notes firms will not self-regulate sycophancy |
| SRC04-E01 | Sycophancy incident caught users by surprise |
| SRC08-E01 | Even NIST does not name sycophancy as a risk category |
| SRC10-E01 | No legislation or regulation targets sycophancy |
## Reasoning
Despite extensive searching across consulting firms, government agencies, commercial training providers, policy templates, and regulatory frameworks, no training material was found that warns about sycophancy, either by name or by an equivalent concept (automation bias, overtrust, confirmation reinforcement). H1 is therefore eliminated.
## Relationship to Other Hypotheses
H1 is the direct opposite of H2. The evidence strongly eliminates H1 and supports H2/H3. H2 and H3 differ on why the warning is absent: under H2 the concept is not yet known at all, while under H3 it exists in the research literature but has not reached training materials. The evidence supports H3.