R0043/2026-03-28/Q001/SRC06/E01
NIST AI RMF vocabulary for overreliance and automation bias
URL: https://www.nist.gov/itl/ai-risk-management-framework
Extract
The NIST Generative AI Profile (NIST-AI-600-1, July 2024) identifies 12 risk categories, including "human-AI configuration issues" that can result in:

- Algorithmic aversion: users rejecting AI recommendations regardless of quality
- Automation bias: users accepting AI recommendations regardless of quality
- Inappropriate anthropomorphizing: users attributing human-like qualities to AI systems
- Overreliance: users depending on AI output without appropriate verification
- Emotional entanglement: users forming inappropriate emotional bonds with AI systems
NIST's seven trustworthiness characteristics are: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair.
JUDGMENT: NIST's vocabulary is richer than the EU AI Act's single term but still focuses entirely on human-side phenomena. "Overreliance" and "automation bias" describe what users do wrong, not what systems do wrong. The closest system-side concept in NIST is the trustworthiness characteristic "valid and reliable" — but this is a property assertion, not a behavioral description equivalent to "sycophancy."
Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Supports | NIST has a distinct vocabulary, including terms not used in the EU AI Act |
| H2 | Contradicts | Multiple named concepts address the phenomenon |
| H3 | Strongly supports | All 5 risk terms are human-side; no system-behavior equivalent exists in the framework |
Context
NIST's inclusion of "emotional entanglement" and "inappropriate anthropomorphizing" alongside "automation bias" shows the framework recognizes multiple pathways to overreliance. None of these terms, however, maps to the AI safety concept of a system deliberately adjusting its output to please the user.