Q001-H2 — Training Does Not Cover Limitations

Statement

Standard corporate AI training courses do not meaningfully teach employees about AI limitations, focusing instead on adoption, productivity, and use-case promotion.

Status

Partially supported. Pure omission of limitations is no longer the norm — most training programs include at least brief mentions of hallucinations and the need for verification. However, the treatment is often so superficial that it may not effectively convey the nature or significance of the limitations.

Supporting Evidence

SRC05-E01: Only 5 of 12 federal agencies explicitly acknowledge hallucinations
SRC06-E01: Training is generic, one-time, and disconnected from practice; a 59% skills gap persists
SRC10-E01: More than 50% of workers find training inadequate

Contradicting Evidence

SRC01-E01: NAVEX course explicitly covers risks and limitations
SRC04-E01: UK playbook includes explicit limitation warnings
SRC07-E01: Microsoft training addresses hallucinations
SRC08-E01: EU AI Act now legally requires risk coverage
SRC09-E01: Sample policies include hallucination warnings

Reasoning

H2 in its strong form is refuted by the evidence: training programs do mention limitations. However, H2 captures an important truth about the functional outcome. Because the coverage is so superficial, employees may not actually understand the limitations even though they were nominally "taught" them.

Relationship to Other Hypotheses

H2 represents the extreme negative position, and the evidence shows it is too strong: limitations are at least mentioned. But the evidence that supports H2 (inadequate coverage, persistent skills gaps, generic treatment) feeds directly into H3, the nuanced middle position.