# R0048/2026-03-29/Q003

## Query
What do corporate AI training materials teach about hallucinations? How do they characterize the problem — as occasional random errors, as a fundamental property of the technology, or as a spectrum that includes both fabrication and subtle confirmation of user expectations? Is there any training that connects hallucination to sycophancy or explains that some incorrect outputs are generated specifically because they match what the user expects?
## BLUF
Corporate AI training treats hallucination as a single, undifferentiated phenomenon: "AI sometimes makes things up; verify your outputs." Some vendor educational content (IBM, NIST) frames hallucination as a fundamental property rather than an occasional glitch, but this framing does not appear in standard training modules. No training material examined conveys the spectrum of hallucination types, their varying detection difficulty, or the connection to sycophancy. Research clearly establishes that sycophantic AI can produce "confirmatory evidence" through biased sampling, misleading even rational users — and that "carefully-selected truths" can produce false beliefs without any fabrication at all. This gap leaves employees able to catch obvious errors but blind to the harder-to-detect forms.
## Answer + Confidence
Training mentions hallucination but does not explain the spectrum or the sycophancy connection. No training material examined teaches that some AI errors are generated because they match user expectations.
Confidence: Very likely (90%), based on convergent evidence from 8 independent sources across academic, government, and commercial sectors.
## Summary
| Document | Link |
|---|---|
| Query definition | query.md |
| Full assessment | assessment.md |
| ACH matrix | ach-matrix.md |
| Self-audit | self-audit.md |
## Hypotheses
| ID | Statement | Status |
|---|---|---|
| H1 | Training characterizes hallucination as fundamental with spectrum and sycophancy connection | Eliminated |
| H2 | Training treats hallucination as occasional random errors | Partially supported |
| H3 | Training treats hallucination as undifferentiated, missing spectrum and sycophancy connection | Supported |
## Searches
| Search | Type | Outcome |
|---|---|---|
| S01 | Hallucination in corporate training | 4 selected / 6 rejected — generic descriptions, no spectrum or sycophancy connection |
| S02 | Hallucination-sycophancy connection | 3 selected / 10 rejected — connection explicit in only a few sources |
| S03 | Academic hallucination taxonomy | 2 selected / 18 rejected — rich research base contrasts with training simplicity |
## Sources
| Source | Reliability | Relevance | Evidence |
|---|---|---|---|
| SRC01 — Hallucination Taxonomy | High | High | SRC01-E01 |
| SRC02 — IBM Hallucinations | Medium-High | High | SRC02-E01 |
| SRC03 — Hallucination-Sycophancy | Medium-Low | High | SRC03-E01 |
| SRC04 — Exeter Co-Hallucination | Medium-High | High | SRC04-E01 |
| SRC05 — NIST Confabulation | High | High | SRC05-E01 |
| SRC06 — Glean Enterprise | Medium | Medium-High | SRC06-E01 |
| SRC07 — Infomineo Guide | Medium | Medium-High | SRC07-E01 |
| SRC08 — Rational Analysis | Medium-High | High | SRC08-E01 |
## Revisit Triggers
- Training materials that introduce hallucination spectrum or detection-difficulty concepts
- NIST AI RMF revision that distinguishes hallucination types
- Enterprise vendor educational content that connects hallucination to sycophancy
- Academic publications on effective hallucination training methods for end users
- EU AI Act implementing guidance that specifies hallucination education requirements