Q003-H3 — Training Treats Hallucination as Undifferentiated, Missing the Spectrum and Sycophancy Connection

Statement

Corporate AI training treats hallucination as a single, undifferentiated phenomenon ("AI sometimes gets things wrong, verify outputs") without conveying the spectrum from random fabrication through subtle confirmation of user expectations. No training connects hallucination to sycophancy or explains that some incorrect outputs are generated specifically because they match what the user expects, making them harder to detect.

Status

Supported. This is the best-supported hypothesis. Research clearly identifies a spectrum of hallucination types with varying detection difficulty, and a formal mechanism connecting hallucination to sycophancy through biased sampling. None of this appears in training.

Supporting Evidence

| Evidence | Summary |
| --- | --- |
| SRC01-E01 | Rich taxonomy exists in research — detection difficulty ranges from easy to nearly impossible |
| SRC03-E01 | Hallucination and sycophancy connected — "confirmatory hallucination" concept |
| SRC04-E01 | "Hallucinate with us" — co-construction of false reality |
| SRC05-E01 | NIST treats confabulation as undifferentiated despite warning about confabulated justifications |
| SRC06-E01 | Enterprise framing focuses on technical fixes, not spectrum |
| SRC08-E01 | Default chatbot interactions produce "confirmatory evidence" through biased sampling — no training addresses this |
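The biased-sampling mechanism behind SRC08-E01 can be illustrated with a minimal simulation (all parameters here are hypothetical, chosen only to show the effect): when a model's errors are tilted toward the answer the user already expects, the fraction of wrong outputs that *confirm* the expectation rises sharply, which is precisely what makes them harder to detect than random fabrication.

```python
import random

random.seed(0)

def sample_answer(true_answer, expected_answer, answers, bias):
    """Toy model: fixed accuracy, but errors are drawn either
    uniformly over wrong answers or tilted toward the user's
    expected answer with probability `bias`."""
    if random.random() < 0.7:          # assume the model is right 70% of the time
        return true_answer
    if random.random() < bias:         # biased error: confirm the expectation
        return expected_answer
    return random.choice([a for a in answers if a != true_answer])

def confirmatory_error_rate(bias, trials=10_000):
    """Among incorrect outputs, what fraction match the user's expectation?"""
    answers = ["A", "B", "C", "D"]
    confirming = errors = 0
    for _ in range(trials):
        truth = random.choice(answers)
        # The user expects some specific wrong answer.
        expected = random.choice([a for a in answers if a != truth])
        out = sample_answer(truth, expected, answers, bias)
        if out != truth:
            errors += 1
            if out == expected:
                confirming += 1
    return confirming / errors

# With unbiased errors, roughly one wrong answer in three happens to
# confirm the expectation; biased sampling pushes that much higher.
print(confirmatory_error_rate(bias=0.0))
print(confirmatory_error_rate(bias=0.8))
```

The point of the sketch: "verify outputs" training implicitly assumes errors look random, but under expectation-biased sampling most errors are exactly the ones a user is least motivated to check.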

Contradicting Evidence

| Evidence | Summary |
| --- | --- |
| SRC02-E01 | IBM frames hallucination as fundamental — partially contradicts "undifferentiated" but does not address spectrum or sycophancy |

Reasoning

Every aspect of H3 is supported: (1) training treats hallucination as undifferentiated — confirmed across all examined training materials; (2) a spectrum from random fabrication to confirmatory hallucination exists in the research — confirmed by the taxonomy survey and the Bayesian analysis; (3) no training connects hallucination to sycophancy — confirmed by its absence across all examined materials. The gap between research understanding and training content is stark.

Relationship to Other Hypotheses

H3 synthesizes H1 and H2: training exists and mentions hallucination (closer to H2 than H1's ideal) but fails to convey the sophistication that research has achieved. The critical missing element is the sycophancy connection — the understanding that some hallucinations are generated because they match user expectations.