# SRC07-E01 — Overreliance Mechanisms and Mitigations

## Extract
The review analyzed ~60 papers and found that "overreliance on AI occurs when users start accepting incorrect AI outputs" and that it "makes it difficult for users to meaningfully leverage the strengths of AI systems and to oversee their weaknesses." Recommended mitigations include ensuring "users have at least a minimum knowledge of how AI features work," using "onboarding techniques and tutorials to make users aware that overreliance is a common phenomenon," nudging "users to engage in meta-cognition," and monitoring "user overreliance post deployment as trust in AI fluctuates over time." The review emphasizes that "an important goal of AI system design is to empower users to develop appropriate reliance on AI."
## Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Contradicts — recommendations exist but are not in standard training | Moderate |
| H2 | Supports — the research recommends training about overreliance, implying such training does not yet exist in practice | Moderate |
| H3 | Supports — overreliance is a well-researched topic with specific mitigations, but these do not appear in standard training | Strong |
## Context
This Microsoft Research review is significant because it comes from within a major AI vendor and recommends specific training-based interventions for overreliance. It is therefore notable that Microsoft's own Responsible AI training modules do not implement these recommendations.
## Notes
The gap between Microsoft Research's recommendations (specific: awareness training, meta-cognition nudges, post-deployment monitoring) and Microsoft's shipped training products (generic: "verify outputs") illustrates the broader research-to-practice gap across the industry.