R0043/2026-03-28/Q002/SRC02/E01
FDA automation bias provisions in CDS guidance
URL: https://www.cooley.com/news/insight/2026/2026-01-20-automation-bias-and-clinical-practice-fda-makes-incremental-updates-to-clinical-decision-support-software-guidance
Extract
FDA's 2026 revised CDS Software Guidance:
- Uses the term "automation bias" explicitly, now relocated to the fourth statutory criterion
- The fourth criterion requires that software "enable a healthcare professional to independently review the basis for recommendations...but not rely primarily on those recommendations to make a clinical diagnosis"
- FDA "continues to support its concern about automation bias by citing only a 2004 journal article", indicating limited engagement with recent AI-specific sycophancy research
- Treats automation bias as a "dependency risk": situations where physicians might over-rely on algorithmic suggestions
- The regulatory solution targets "transparency (enabling independent review of recommendation bases) rather than restricting software capability itself"
JUDGMENT: FDA addresses automation bias more explicitly than most regulators but still treats it as a transparency/independent-review problem, not a system-design problem. The guidance does not require that AI systems be designed to challenge clinician assumptions or avoid sycophantic agreement. Healthcare's "acquiescence problem" (Q001 finding) has no corresponding regulatory requirement.
Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Partially supports | FDA does explicitly name "automation bias" — but the requirement is for transparency, not system design |
| H2 | Contradicts | An explicit regulatory requirement addressing automation bias does exist (the fourth criterion) |
| H3 | Supports | Transparency requirement rather than behavioral constraint — indirect approach |
Context
The FDA's reliance on a single 2004 journal article to support its automation bias concern suggests the regulatory apparatus has not kept pace with the AI sycophancy research of 2024-2025. The 22-year gap between the cited evidence and the 2026 guidance underscores that lag.