R0044/2026-03-29/Q001/SRC01/E01¶
NIST AI 600-1 identifies human-AI configuration as a distinct risk category for generative AI systems, with recommended actions addressing automation bias and overreliance.
URL: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf
Extract¶
NIST AI 600-1 describes 12 risks unique to or exacerbated by generative AI. One of these is human-AI configuration, defined as arrangements of or interactions between a human and an AI system that can result in "algorithmic aversion, automation bias, or inappropriate anthropomorphizing of, overreliance on, or emotional entanglement with GAI systems."
The profile provides more than 200 actions across the 12 risk categories for AI developers to consider when managing risks. NIST urges organizations to evaluate these risks from the earliest stages of system design and documentation.
Key distinction: these are recommended actions, not mandatory requirements. NIST AI 600-1 is a risk management framework, not a regulation. However, it is referenced by other regulatory frameworks (including OMB M-24-10 and M-26-04) and serves as the de facto technical standard for U.S. government AI deployments.
Relevance to Hypotheses¶
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Supports | Demonstrates that system-side risk identification exists at the standards level, including automation bias as a design concern |
| H2 | Contradicts | Directly contradicts the claim that no system-side requirements exist — NIST provides system design actions |
| H3 | Supports | The actions are recommended, not mandatory; they address risk categories but do not prescribe specific output behaviors like "do not agree with the user" |
Context¶
NIST AI 600-1 is the companion document to the broader NIST AI Risk Management Framework (AI RMF). While not legally binding on its own, it is incorporated by reference into government procurement through OMB memoranda and is increasingly adopted in the private sector. Its identification of automation bias as a system design concern (not just a human training concern) represents a significant step toward system-side requirements.
Notes¶
The document could not be fully extracted due to PDF encoding issues. Analysis is based on published summaries and the search result descriptions, which consistently describe the 12 risk categories and 200+ recommended actions.