R0044/2026-04-01/Q002/S02

WebSearch — Healthcare AI errors, false confirmation, and clinical decision support harm

Summary

Source/Database: WebSearch
Query terms: "AI agreeable output harm incident reports near-miss healthcare clinical decision support wrong diagnosis" + '"alert fatigue" "commission error" AI clinical decision support system design requirements preventing confirmation bias'
Filters: None
Results returned: 20
Results selected: 1
Results rejected: 19

Selected Results

Result: S02-R01
Title: False conflict and false confirmation errors (Nature Communications)
URL: https://www.nature.com/articles/s41467-024-50952-3
Rationale: Peer-reviewed study directly addressing false confirmation in clinical AI

Rejected Results

Result: S02-R02
Title: Various healthcare AI error articles (19 results)
URL: Various
Rationale: The rejected set included general AI diagnostic error articles, malpractice liability discussions, alert fatigue studies, and bias recognition reviews. Most address AI producing incorrect output rather than AI agreeing with incorrect user assumptions; the Nature Communications false confirmation study was the only source specifically addressing the agreement mechanism.

Notes

Healthcare AI error literature is extensive but focuses primarily on AI producing incorrect output (misdiagnosis, biased recommendations) rather than on AI confirming incorrect human assumptions. The false confirmation study from Nature Communications was the key discriminating find: it specifically addresses the mechanism by which AI agreement with an incorrect clinician hypothesis leads to harm.