# R0048/2026-03-29/Q001
## Query
What do standard corporate AI training courses and onboarding materials teach employees about AI limitations? Look for publicly available training curricula, guidelines, or summaries from major employers, consulting firms (Deloitte, McKinsey, PwC, KPMG), government agencies (GSA, DoD, NHS), and training providers. What specific warnings or guidance do they include about AI reliability, accuracy, and behavioral tendencies?
## BLUF
Corporate and government AI training programs are widespread (82% of enterprises provide some form) and most mention AI limitations, typically hallucinations and the need to verify outputs. Coverage is consistently superficial, however: one- or two-sentence warnings that do not explain failure mechanisms, the spectrum of hallucination types, or behavioral tendencies such as sycophancy. Workers confirm this: more than half report that their AI training is inadequate. The result is a 59% skills gap despite near-universal training availability.
## Answer + Confidence
Training mentions limitations but treats them as checkbox warnings rather than core competencies. No training material examined explains why some AI errors are harder to detect than others, or that AI may generate outputs specifically because they match user expectations.
Confidence: Very likely (85%). High confidence, based on convergent evidence from 11 independent sources.
## Summary
| Document | Link |
|---|---|
| Query definition | query.md |
| Full assessment | assessment.md |
| ACH matrix | ach-matrix.md |
| Self-audit | self-audit.md |
## Hypotheses
| ID | Statement | Status |
|---|---|---|
| H1 | Training adequately covers limitations with actionable warnings | Partially supported |
| H2 | Training does not meaningfully cover limitations | Partially supported |
| H3 | Training mentions limitations superficially without explaining mechanisms | Supported |
## Searches
| Search | Focus | Outcome |
|---|---|---|
| S01 | Corporate AI training programs | 3 selected / 7 rejected — most results conflated "AI in training" with "training about AI" |
| S02 | Consulting firms and government agencies | 8 selected / 42 rejected — government sources more transparent than consulting firms |
| S03 | Policy templates and regulatory frameworks | 5 selected / 45 rejected — templates consistently brief on limitations |
## Sources
| Source | Reliability | Relevance | Evidence |
|---|---|---|---|
| SRC01 — NAVEX AI Training | Medium | High | SRC01-E01 |
| SRC02 — GSA Training Series | Medium-High | High | SRC02-E01 |
| SRC03 — Deloitte AI Academy | Medium | High | SRC03-E01 |
| SRC04 — UK AI Playbook | High | High | SRC04-E01 |
| SRC05 — GAO Federal AI | High | High | SRC05-E01 |
| SRC06 — DataCamp Training Gaps | Medium | High | SRC06-E01 |
| SRC07 — Microsoft Responsible AI | Medium-High | High | SRC07-E01 |
| SRC08 — EU AI Act Article 4 | High | High | SRC08-E01 |
| SRC09 — Fisher Phillips Policy | Medium-High | High | SRC09-E01 |
| SRC10 — WEF Mitigate Risks | Medium-High | High | SRC10-E01 |
| SRC11 — NHS AI Training | High | Medium-High | SRC11-E01 |
## Revisit Triggers
- Publication of specific training module content from major providers
- Post-training knowledge assessment studies measuring what employees actually learned
- EU AI Act enforcement actions clarifying "sufficient" AI literacy standards
- Updates to NIST AI RMF or UK AI Playbook with more specific training requirements