R0044/2026-03-29/Q002/SRC04/E01

Research R0044 — Expanded Vocabulary Research
Run 2026-03-29
Query Q002
Source SRC04
Evidence SRC04-E01
Type Analytical

Bowtie analysis identifies automation bias in healthcare AI as a systemic risk requiring both design-phase and post-deployment interventions.

URL: https://www.sciencedirect.com/science/article/pii/S2666449624000410

Extract

The study conducts "an in-depth review and Bowtie analysis of automation bias in AI-driven Clinical Decision Support Systems (CDSSs) within healthcare settings." It defines automation bias as "the tendency of human operators to over-rely on automated systems" and presents it as "a critical challenge in AI implementation within clinical environments."

The paper "proposes preventive measures to address automation bias during the design phase of AI model development for CDSSs, along with effective mitigation strategies post-deployment." The findings "highlight the imperative role of a systems approach, integrating technological advancements, regulatory frameworks, and collaborative endeavors between AI developers and healthcare practitioners to diminish automation bias."

Relevance to Hypotheses

Hypothesis | Relationship | Strength
H1 | Supports | Identifies automation bias as causing systemic risk in healthcare
H2 | Contradicts | Systematic analysis of the risk confirms it is real and documented
H3 | Supports | Frames automation bias as human over-reliance on automated systems, not system agreeableness

Context

The Bowtie methodology is a standard risk analysis approach borrowed from process safety engineering. Its application to healthcare AI automation bias suggests the problem is being taken seriously as a systems engineering concern, not just a human factors curiosity.
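The Bowtie structure can be sketched as a simple data model: preventive barriers sit between threats and the central "top event," while mitigating barriers sit between the top event and its consequences, which maps directly onto the paper's split between design-phase prevention and post-deployment mitigation. The Python sketch below is illustrative only; the specific threat, barrier, and consequence names are assumptions for the example, not taken from the source.

```python
from dataclasses import dataclass, field

@dataclass
class Barrier:
    name: str
    phase: str  # "design" (preventive) or "post-deployment" (mitigative)

@dataclass
class Bowtie:
    # The top event is the central loss-of-control point of the diagram.
    top_event: str
    # Left side of the bowtie: each threat with its preventive barriers.
    threats: dict = field(default_factory=dict)
    # Right side: each consequence with its mitigating barriers.
    consequences: dict = field(default_factory=dict)

    def preventive_barriers(self):
        return [b for bs in self.threats.values() for b in bs]

    def mitigating_barriers(self):
        return [b for bs in self.consequences.values() for b in bs]

# Hypothetical bowtie for automation bias in a CDSS (example names assumed).
bt = Bowtie(top_event="Clinician over-relies on CDSS recommendation")
bt.threats["Opaque model output"] = [Barrier("Explainable output design", "design")]
bt.threats["Alert fatigue"] = [Barrier("Alert prioritisation", "design")]
bt.consequences["Missed diagnosis"] = [Barrier("Second-reader review", "post-deployment")]

print(len(bt.preventive_barriers()), len(bt.mitigating_barriers()))  # 2 1
```

The point of the structure is that the two barrier lists correspond to the paper's two intervention classes: everything on the threat side must be addressed during model development, everything on the consequence side after deployment.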