# R0043/2026-03-28/Q002 — Assessment
## BLUF
Regulated industries have requirements that address the sycophancy phenomenon, but only through indirect means: human oversight mandates, automation bias awareness obligations, and general trustworthiness criteria. No regulation, standard, or procurement specification reviewed here directly requires AI systems to avoid producing agreeable-but-wrong output. The regulatory gap mirrors the vocabulary gap identified in Q001 — because regulators frame the problem as human overreliance (not system sycophancy), they write human-side solutions (train the operator) rather than system-side solutions (constrain the model).
## Probability
Rating: H3 (indirect requirements) is assessed as Very likely (80-95%)
Confidence in assessment: High
Confidence rationale: Evidence from 6 authoritative sources (EU legislation, FDA guidance, DoD principles, NIST framework, cross-jurisdictional taxonomy, financial services framework) consistently shows an indirect-only approach. No counter-evidence was found.
## Reasoning Chain
- The EU AI Act Article 14 is the most explicit regulatory provision: it names "automation bias" and requires deployer awareness and override capability. But the obligation is on deployers, not providers, and mandates awareness, not system design constraints [SRC01-E01, High reliability, High relevance].
- FDA's 2026 CDS guidance explicitly uses "automation bias" and requires independent review capability. However, it treats this as a transparency obligation and cites a 2004 paper — 22 years behind current sycophancy research [SRC02-E01, Medium-High reliability, High relevance].
- DoD Responsible AI tenets (Responsible, Equitable, Traceable, Reliable, Governable) are used as procurement evaluation criteria. "Governable" requires human intervention and control mechanisms. None specifically address sycophancy-adjacent system behavior [SRC03-E01, High reliability, Medium-High relevance].
- NIST AI RMF identifies "overreliance" as a risk category with mitigation guidance, but it is a voluntary framework, not a binding standard [SRC04-E01, High reliability, Medium-High relevance].
- The AIR 2024 cross-jurisdictional taxonomy found that "risks associated with AI overreliance or excessive autonomy are less frequently specified in detail" across corporate policies — confirming the gap extends beyond regulation to industry practice [SRC05-E01, Medium-High reliability, High relevance].
- The FSSCC financial services framework comprises 230 control objectives, none of which specifically targets AI agreeableness; its controls focus on governance, validation, and monitoring [SRC06-E01, Medium-High reliability, Medium-High relevance].
## Evidence Base Summary
| Source | Description | Reliability | Relevance | Key Finding |
|---|---|---|---|---|
| SRC01 | EU AI Act Article 14 | High | High | Automation bias awareness mandate — deployer obligation |
| SRC02 | FDA CDS Guidance | Medium-High | High | Transparency requirement citing 2004 research |
| SRC03 | DoD RAI Principles | High | Medium-High | General trustworthiness tenets in procurement |
| SRC04 | NIST GAI Profile | High | Medium-High | Risk identification without binding requirements |
| SRC05 | AIR 2024 Taxonomy | Medium-High | High | Overreliance risks underspecified in corporate policies |
| SRC06 | FSSCC Framework | Medium-High | Medium-High | 230 controls, none sycophancy-specific |
## Collection Synthesis
| Dimension | Assessment |
|---|---|
| Evidence quality | Robust — legislation, government frameworks, cross-jurisdictional analysis |
| Source agreement | High — all sources confirm indirect-only approach |
| Source independence | High — EU, U.S., and cross-jurisdictional sources |
| Outliers | None — remarkable consistency across all jurisdictions and sectors |
## Detail
The most significant finding is the consistency: across 4 sectors, 3 jurisdictions, and multiple regulatory approaches, every requirement addresses the sycophancy phenomenon from the human side. This is not a coincidence — it reflects the vocabulary finding from Q001. Regulators who frame the problem as "automation bias" (a human cognitive failure) naturally write requirements for human awareness and override capability. A regulator who framed it as "sycophancy" (a system behavioral failure) would naturally write requirements for system design constraints.
## Gaps
| Missing Evidence | Impact on Assessment |
|---|---|
| Proprietary enterprise procurement RFPs | Private-sector requirements may be more specific than public regulatory guidance |
| OCC/Fed bank examination guidance for AI | May contain AI-specific requirements not found in public searches |
| Aviation-specific AI deployment standards (EASA, FAA) | Aviation may have more specific automation requirements than those found in open sources |
| Defense-specific AI system acceptance testing criteria | Classified or FOUO documents may contain sycophancy-relevant requirements |
## Researcher Bias Check
Declared biases: No researcher profile provided.
Influence assessment: The query assumes requirements exist "under domain-specific names" — presupposing that Q001's vocabulary would map to requirements. This framing was tested by including H2 (no requirements exist) as a hypothesis. H2 was eliminated, confirming that the framing was warranted but not self-fulfilling.
## Cross-References
| Entity | ID | File |
|---|---|---|
| Hypotheses | H1, H2, H3 | hypotheses/ |
| Sources | SRC01-SRC06 | sources/ |
| ACH Matrix | — | ach-matrix.md |
| Self-Audit | — | self-audit.md |