# R0048/2026-04-01/Q003 — Query Definition
## Query as Received
What do corporate AI training materials teach about hallucinations? How do they characterize the problem — as occasional random errors, as a fundamental property of the technology, or as a spectrum that includes both fabrication and subtle confirmation of user expectations? Is there any training that connects hallucination to sycophancy or explains that some incorrect outputs are generated specifically because they match what the user expects?
## Query as Clarified
This query asks three distinct sub-questions:
- What is taught about hallucination? (descriptive — what content exists)
- How is hallucination characterized? (analytical — what framing is used)
- Is the hallucination-sycophancy connection made? (specific — is this link present)
The third sub-question contains an embedded assertion worth surfacing: the claim that "some incorrect outputs are generated specifically because they match what the user expects" presupposes a connection between hallucination and sycophancy. This connection is supported by AI safety research (Tsinghua H-Neuron studies, Giskard analysis) but may not appear in training materials.
## BLUF
Hallucination is the most widely addressed AI failure mode in corporate training: the DOL framework names it explicitly, and most AI literacy guidance emphasizes output verification. However, the training materials reviewed uniformly characterize hallucination as random fabrication (the AI "makes things up") rather than as a spectrum that includes errors which confirm user expectations. No training material reviewed connects hallucination to sycophancy. AI safety research has established that hallucination and sycophancy share neural mechanisms ("fewer than 0.01% of neurons" are responsible for both), but this finding has not entered any training curriculum. Employees are taught to verify AI outputs for random errors, but not to guard against outputs that seem correct because they confirm what the employee already believes.
## Scope
- Domain: AI hallucination training, corporate/government education, hallucination-sycophancy connection
- Timeframe: 2023-2026
- Testability: Verifiable through publicly available training materials and AI safety research
## Assessment Summary
Probability: N/A (open-ended query)
Confidence: Medium-High
Hypothesis outcome: H2 is best supported (hallucination covered as random error; sycophancy connection absent).
[Full assessment in assessment.md.]
## Status
| Field | Value |
|---|---|
| Date created | 2026-04-01 |
| Date completed | 2026-04-01 |
| Researcher profile | Phillip Moore |
| Prompt version | Unified Research Methodology v1 |
| Revisit by | 2026-10-01 |
| Revisit trigger | Any training program connects hallucination to sycophancy; IAPP governance framework incorporates sycophancy; Tsinghua H-Neuron research replicated or refuted |