R0044/2026-03-29/Q002/SRC03/E01

Research R0044 — Expanded Vocabulary Research
Run 2026-03-29
Query Q002
Source SRC03
Evidence SRC03-E01
Type Reported

Clinical evidence that automation bias in AI-driven CDS leads to measurable patient harm, including disparate misdiagnosis rates.

URL: https://jamanetwork.com/journals/jama/article-abstract/2812931

Extract

"In settings where the model is inaccurate — either systematically biased or due to imperfect performance — patients may be harmed as clinicians defer to the AI model over their own judgment." AI misdiagnosis rates for minority patients were 31% higher than for majority patients in critical care settings (REPORTED: from secondary sources citing related JAMA studies).

Physician diagnostic performance can be degraded when clinicians follow incorrect predictions, a phenomenon known as overreliance. Systematically biased AI models can therefore lead to errors and patient harm: for example, AI algorithms trained on chest X-rays consistently underdiagnosed pulmonary abnormalities in historically under-served patient populations.

ECRI identified AI-fueled misdiagnoses as one of 2026's top patient safety threats.

JUDGMENT: The harm documented here is from automation bias (clinician deference) rather than from AI systems designed to agree with clinicians. The AI systems provided incorrect recommendations, and clinicians accepted them uncritically. This is the interaction-pattern harm described in H3, not the system-side sycophancy described in H1.

Relevance to Hypotheses

Hypothesis | Relationship | Rationale
H1 | Supports | Documents measurable patient harm from over-reliance on AI
H2 | Contradicts | Clear evidence of documented harm
H3 | Supports | The harm mechanism is clinician deference (automation bias), not system-designed agreeableness

Context

The healthcare evidence base for automation bias is the largest and most rigorous of any sector examined. However, the harm mechanism documented in this literature is consistently automation bias (human over-reliance on AI output) rather than sycophancy (system-side agreeableness designed to defer to the clinician).