# SRC11-E01 — NHS AI Training Requirements
## Extract
NHS guidance requires healthcare workers to have "awareness of the importance of reproducibility and generalisability of AI models, and the risk of data and model bias in AI, which may disadvantage specific groups or reinforce existing health inequalities." Staff must "flag potentially inappropriate, biased or inaccurate outputs and avoid using generative AI tools for decision-making without human oversight," and should "never input personal or sensitive information into public AI tools." AI implementation "must only be carried out when there has been robust clinical validation."
## Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Supports — training guidance covers limitations and mandates human oversight | Strong |
| H2 | Contradicts — clear guidance exists in healthcare sector | Moderate |
| H3 | Supports — guidance is about flagging and oversight, not understanding failure mechanisms | Moderate |
## Context
The NHS is a high-stakes deployment context in which AI errors carry direct patient-safety implications, and its training requirements reflect this elevated risk profile.
## Notes
The NHS guidance is more operationally specific than many corporate policies: staff must "flag" problematic outputs rather than merely "verify" them. However, like other training materials, it neither addresses behavioral tendencies such as sycophancy nor explains why certain AI errors are harder to detect than others.