R0044/2026-03-29/Q001/S05
WebSearch — NIST AI 600-1 and EU AI Act automation bias system design requirements
Summary

| Field | Value |
|---|---|
| Source/Database | WebSearch |
| Query terms | NIST AI 600-1 "automation bias" "overreliance" system design requirements mitigate sycophancy 2024 2025; EU AI Act "automation bias" system requirements "high-risk" prevent AI agreeing with users transparency |
| Filters | None |
| Results returned | 20 (combined two searches) |
| Results selected | 4 |
| Results rejected | 16 |
Selected Results

| Result | Title | URL | Rationale |
|---|---|---|---|
| S05-R01 | NIST AI 600-1 GAI Profile | https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf | Primary NIST standard addressing automation bias as a system design risk |
| S05-R02 | EU AI Act Article 14: Human Oversight | https://artificialintelligenceact.eu/article/14/ | Most explicit system-side requirement for automation bias prevention |
| S05-R03 | Adjust for Trust: Trust-adaptive AI | https://arxiv.org/html/2502.13321v2 | System-side intervention proposal with empirical evidence |
| S05-R04 | Trust Calibration Maturity Model (Sandia) | https://arxiv.org/abs/2503.15511 | Framework for communicating AI trustworthiness — human-side only |
Rejected Results

| Result | Title | URL | Rationale |
|---|---|---|---|
| S05-R05 | WilmerHale: NIST AI Risk Mitigation | https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240808-nist-issues-new-ai-risk-mitigation-guidelines-and-software | Secondary coverage |
| S05-R06 | EU AI Act Article 6 (Classification Rules) | https://artificialintelligenceact.eu/article/6/ | Classification rules, not system behavior requirements |
| S05-R07 | EU AI Act Article 50 (Transparency) | https://artificialintelligenceact.eu/article/50/ | Transparency disclosure, not automation bias prevention |
| S05-R08 | EU AI Act Article 13 (Transparency for Deployers) | https://artificialintelligenceact.eu/article/13/ | Deployer transparency, less specific than Article 14 |
| S05-R09 | EU AI Act high-level summary | https://artificialintelligenceact.eu/high-level-summary/ | Summary overview |
| S05-R10 | ISACA: Understanding EU AI Act | https://www.isaca.org/resources/white-papers/2024/understanding-the-eu-ai-act | Industry guidance document |
Notes

This combined search yielded the two strongest system-side requirements (NIST AI 600-1 and EU AI Act Article 14) plus the most promising academic proposal (trust-adaptive AI). The Trust Calibration Maturity Model from Sandia was also selected because it represents the frontier of how AI trustworthiness could be communicated, even though it addresses only the human side.