Research R0044 — Expanded Vocabulary Research
Run 2026-04-01
Query Q001
Search S01
Result S01-R01
Source SRC01

CSET Georgetown — AI Safety and Automation Bias Issue Brief

Source

Field Value
Title AI Safety and Automation Bias: The Downside of
Publisher Center for Security and Emerging Technology, Georgetown University
Author(s) Lauren Kahn, Emelia S. Probasco, Ronnie Kinoshita
Date November 2024
URL https://cset.georgetown.edu/publication/ai-safety-and-automation-bias/
Type Policy research / Issue brief

Summary

Dimension Rating
Reliability High
Relevance High
Bias: Missing data Low risk
Bias: Measurement Low risk
Bias: Selective reporting Low risk
Bias: Randomization N/A — not an RCT
Bias: Protocol deviation N/A — not an RCT
Bias: COI/Funding Low risk

Rationale

Dimension Rationale
Reliability CSET is a respected policy research center at Georgetown University with a track record of rigorous AI policy analysis. The brief uses a three-tiered framework (user, technical, organizational) grounded in existing literature.
Relevance Directly addresses automation bias in AI systems using the exact vocabulary from the research query. Provides case studies and a framework that distinguishes between user-side and technical/system-side factors.
Bias flags No significant bias flags. CSET is a nonpartisan research center funded by Open Philanthropy.

Evidence Extracts

Evidence ID Summary
SRC01-E01 Three-tiered framework for automation bias: user, technical design, and organizational factors