# SRC07-E01 — Microsoft Responsible AI Training Content
## Extract
Microsoft's responsible AI training explores six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The training's technical term for hallucinations is "ungrounded content": "a model has changed the data it's been given or added additional information not contained in it." The guidance recommends "notifying users that AI-generated outputs might contain inaccuracies in initial experiences and including reminders to check AI-generated output for potential inaccuracies throughout use." Employees learn to "evaluate the ethical implications in AI by critically analyzing AI-generated content and cross-verifying information from multiple sources."
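To make the "ungrounded content" definition concrete, the sketch below (our own illustration, not taken from the Microsoft training) flags output sentences whose content words cannot be traced back to the source text the model was given. Production groundedness detection relies on entailment or retrieval models; simple word overlap is used here only to illustrate the concept.

```python
# Naive groundedness check: flag output sentences that introduce
# content words absent from the source text the model was given.
# Illustrative only -- real groundedness detection uses entailment
# models, not word overlap.
import re

def content_words(text: str) -> set[str]:
    """Lowercased alphabetic words of 4+ letters, a rough proxy for content terms."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) >= 4}

def ungrounded_sentences(source: str, output: str, threshold: float = 0.5) -> list[str]:
    """Return output sentences in which less than `threshold` of the
    content words can be traced to the source text."""
    grounded = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", output.strip()):
        words = content_words(sentence)
        if words and len(words & grounded) / len(words) < threshold:
            flagged.append(sentence)
    return flagged

if __name__ == "__main__":
    source = "The report covers fiscal year 2023 revenue of 4.2 million dollars."
    output = ("Revenue in fiscal 2023 was 4.2 million dollars. "
              "The company also expanded into three European markets.")
    for s in ungrounded_sentences(source, output):
        print("UNGROUNDED:", s)  # flags the invented expansion claim
```

Even this toy check makes the definition's two failure modes visible: changed data lowers overlap with the source, and added information introduces words the source never contained.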
## Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Supports — training exists and addresses limitations including hallucinations | Strong |
| H2 | Contradicts — training content exists and is publicly available | Moderate |
| H3 | Supports — covers limitations, but only at the principle level; "verify outputs" is generic advice | Moderate |
## Context
Microsoft Learn modules are widely used; many organizations adopt them as part of their AI training because they are free, structured, and come from a major vendor. This makes the modules a de facto standard for corporate AI training programs.
## Notes
The term "ungrounded content" is a technical euphemism that may soften the impact for learners. The guidance to "verify" outputs is also generic: it does not explain how verification itself fails, e.g., when an AI output matches the user's expectations, seems plausible, and therefore never gets checked.