R0044/2026-04-01/Q001/SRC01/E01
CSET's three-tiered framework for understanding and addressing automation bias
URL: https://cset.georgetown.edu/publication/ai-safety-and-automation-bias/
Extract
CSET provides a framework that examines automation bias at three levels:
- User level: Individual cognitive tendencies to over-rely on automated system outputs, including accepting AI recommendations without independent verification.
- Technical design level: System design factors that influence automation bias, including how AI outputs are presented, the availability of explanations, and interface design choices.
- Organizational level: Institutional policies, training requirements, and operational procedures that shape how individuals interact with AI systems.
The brief states that "automation bias can endanger the successful use of artificial intelligence by eroding the user's ability to meaningfully control an AI system" and presents case studies illustrating how these three levels interact. Notably, it calls for addressing automation bias at the technical and design level, not only through user training, making it one of the few policy documents to recognize that system design itself contributes to automation bias.
Relevance to Hypotheses
| Hypothesis | Relationship | Rationale |
|---|---|---|
| H1 | Weakly supports | Identifies technical design as a factor but cites no specific enforceable requirements constraining system output |
| H2 | Supports | Recognizes the need for system-level intervention, consistent with an emerging but incomplete regulatory posture |
| H3 | Contradicts | Demonstrates that at least some policy research explicitly addresses the system/technical level |
Context
This issue brief is policy analysis rather than regulation itself: it identifies what requirements should exist rather than documenting what requirements do exist. Its significance lies in demonstrating that the policy research community has recognized the system-side gap.