R0044/2026-04-01/Q001/SRC03/E01

Research R0044 — Expanded Vocabulary Research
Run 2026-04-01
Query Q001
Source SRC03
Evidence SRC03-E01
Type Factual

NIST AI 600-1 identifies confabulation, overreliance, and human-AI configuration as GenAI-specific risks

URL: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf

Extract

NIST AI 600-1 identifies 12 risks unique to or exacerbated by generative AI, including:

  • Confabulation: "Confidently stated but erroneous or false content." Unlike traditional software errors, confabulations are "presented with the same linguistic confidence as accurate information, making them particularly dangerous in high-stakes domains like healthcare diagnosis, legal research, and financial analysis."
  • Human-AI Configuration: Arrangements of or interactions between humans and AI systems that can result in "algorithmic aversion, automation bias, or inappropriate anthropomorphizing of, overreliance on, or emotional entanglement with AI systems."
  • Overreliance: The framework explicitly states that "humans may over-rely on AI systems or may unjustifiably perceive AI content to be of higher quality than other sources. This phenomenon is an example of automation bias, or excessive deference to automated systems."

The framework uses a Govern-Map-Measure-Manage structure with over 200 suggested actions, including pre-deployment testing, content provenance verification, and the ability to "halt development or deployment if unacceptable negative risks emerge."
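The Govern-Map-Measure-Manage structure with its halt condition can be sketched as a small data model. This is a purely illustrative sketch: the class names, action descriptions, and `may_proceed` helper are assumptions of this note, not terminology from NIST AI 600-1 itself.

```python
# Illustrative sketch of the Govern-Map-Measure-Manage structure described
# in NIST AI 600-1. All identifiers here are hypothetical, not drawn from
# the framework text.
from dataclasses import dataclass, field
from enum import Enum


class Function(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"


@dataclass
class SuggestedAction:
    function: Function
    description: str
    completed: bool = False


@dataclass
class RiskAssessment:
    risk: str                   # e.g. "Confabulation"
    unacceptable: bool = False  # trips the halt condition below
    actions: list = field(default_factory=list)


def may_proceed(assessments):
    """Deployment continues only while no risk is assessed as unacceptable,
    mirroring the framework's suggested action to "halt development or
    deployment if unacceptable negative risks emerge"."""
    return not any(a.unacceptable for a in assessments)


confab = RiskAssessment(
    risk="Confabulation",
    actions=[
        SuggestedAction(Function.MEASURE, "Pre-deployment testing"),
        SuggestedAction(Function.MANAGE, "Content provenance verification"),
    ],
)
print(may_proceed([confab]))  # True until some risk is marked unacceptable
```

Note that the halt condition operates on the assessment, not on the model's output: consistent with the observation below, nothing in this structure constrains how the system generates content.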

Critical observation: NIST identifies these as risks to be managed but does not prescribe specific system-side behavioral constraints. The suggested actions focus on testing, monitoring, and governance processes rather than constraining how the AI generates output.

Relevance to Hypotheses

Hypothesis | Relationship | Strength | Notes
H1 | Supports | Weak | Identifies the right risks but provides suggested actions, not enforceable requirements
H2 | Supports | Strong | Exemplifies the emerging pattern: risks are recognized and mitigations suggested, but specific output-behavior requirements are absent
H3 | Contradicts | n/a | NIST clearly addresses system-level risks, not just human-side behavior

Context

NIST AI 600-1 is voluntary — it provides a risk management framework, not binding regulations. Its significance is that the US federal standards body has officially recognized confabulation and overreliance as GenAI-specific risks, creating a foundation for future binding requirements. The framework's vocabulary includes "automation bias" and "overreliance" but not "sycophancy."