
Research R0044 — Expanded Vocabulary Research
Run 2026-03-29
Query Q001
Hypothesis H1

Statement

System-side requirements exist in regulated industries: published standards in at least some of the four sectors (defense, healthcare, aviation, financial services) contain explicit requirements constraining AI system behavior so that it does not reinforce user assumptions or produce agreeable-but-incorrect output.

Status

Current: Partially supported

The evidence shows that some regulatory frameworks do impose system-side design requirements related to preventing automation bias. The EU AI Act Article 14 is the most explicit: it requires high-risk AI systems to be "designed and developed in such a way" that they support effective human oversight and help users remain "aware of the possible tendency of automatically relying or over-relying on the output." NIST AI 600-1 identifies "human-AI configuration" risks, including automation bias and overreliance, as system design concerns, with over 200 recommended actions across the profile. The FAA framework specifies that automation bias "may be suppressed if other, non-automated sources of information are presented with salience equal to that of the automated information," which is itself a design requirement on the system. The DoD has directed the Chief Digital and Artificial Intelligence Office (CDAO) to establish "objectivity benchmarks as a primary procurement criterion."

However, none of these frameworks explicitly constrains whether a system agrees with users or reinforces their assumptions. They focus on transparency, explainability, and enabling human oversight rather than on limiting output agreeableness.

Supporting Evidence

SRC01-E01: NIST AI 600-1 identifies human-AI configuration risks, including automation bias and overreliance, as system design concerns
SRC02-E01: EU AI Act Article 14 requires high-risk systems to be designed to support oversight and awareness of the tendency toward automation bias
SRC04-E01: FAA framework specifies system design approaches to suppress automation bias through equal-salience information presentation
SRC03-E01: DoD AI Strategy directs the establishment of objectivity benchmarks as procurement criteria

Contradicting Evidence

SRC05-E01: FINRA guidance places responsibility for output validation entirely on human supervision, not system design
SRC06-E01: Research proposes trust-adaptive AI interventions, but these are academic proposals, not adopted requirements

Reasoning

H1 receives partial support: system-side requirements do exist, but they are less explicit and less comprehensive than the hypothesis implies. The strongest examples (the EU AI Act and the FAA framework) require systems to be designed to enable oversight and reduce automation bias, but they do not prescribe specific output behaviors such as "the system must present counterarguments" or "the system must not agree with the user without evidence." The requirements operate at the architecture and design level, not at the output-content level.

Relationship to Other Hypotheses

H1 and H3 overlap significantly. The evidence supports H1 in a weak sense (requirements exist) but more strongly supports H3 (the requirements are indirect and emergent). H2 is partially eliminated: it is not true that no system-side requirements exist, though it is true that the most common regulatory approach remains human-focused.