Research R0043 — Sycophancy Vocabulary
Run 2026-03-28
Query Q002
Hypothesis H1

Statement

Regulated industries have explicit, direct requirements addressing the sycophancy phenomenon under domain-specific names.

Status

Current: Eliminated

No regulated industry has requirements that directly name or target the system behavior of producing agreeable-but-wrong output. Requirements address human oversight (EU AI Act), automation bias awareness (FDA), and calibrated trust (DoD), but none mandate that AI systems be designed not to sycophantically agree with users.

Supporting Evidence

No evidence directly supports H1. The closest candidate is FDA's reference to automation bias, but this is framed as a deployer-awareness obligation, not a system design requirement.

Contradicting Evidence

Evidence Summary
SRC01-E01: EU AI Act requires deployer awareness of automation bias, not system-side behavioral constraints.
SRC02-E01: FDA frames automation bias as a transparency obligation, not a system design mandate.
SRC03-E01: DoD RAI principles address trustworthiness generally without targeting sycophancy-adjacent system behavior.

Reasoning

Confirming H1 would require evidence of requirements such as "AI systems shall not adjust output to match user expectations" or "clinical decision support shall challenge clinician assumptions." No such requirement was found in any regulatory framework searched. The absence is the finding.

Relationship to Other Hypotheses

H1's elimination supports H3 (indirect requirements exist) over H2 (no requirements at all). Requirements do exist but address the phenomenon from the human-oversight angle rather than the system-behavior angle.