R0048/2026-04-01/Q002/H3
Statement
Neither sycophancy nor any functionally equivalent concept is addressed in corporate or government AI training. Training treats AI as a passive tool that may produce errors, without recognizing that AI systems have active behavioral tendencies that influence users.
Status
Current: Partially supported
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC01-E01 | Georgetown paper frames sycophancy as an unaddressed risk requiring new interventions |
| SRC04-E01 | Science study shows AI models are measurably more sycophantic than humans, yet training does not address this |
| SRC02-E01 | IPR describes sycophancy as a "hidden risk" in the workplace — hidden because training does not surface it |
Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC06-E01 | NHS framework names automation bias — a functionally related concept |
| SRC05-E01 | Some training covers scenarios where AI can fail, which is only tangentially related to sycophancy |
Reasoning
H3 is partially supported for the corporate training landscape generally, but it overstates the case for healthcare-specific training. Because the NHS framework explicitly names automation bias, H3 cannot be fully supported: at least one program addresses a functionally related concept. For the vast majority of corporate training, however, H3 describes the situation accurately: AI is treated as a passive, error-prone tool rather than an active behavioral agent.
Relationship to Other Hypotheses
H3 is close to H2 but draws a sharper line. The key distinction is whether adjacent concepts such as automation bias provide meaningful protection against sycophancy. H3 says no; H2 says partially. The evidence slightly favors H2: the NHS automation bias coverage does provide some relevant awareness, even if it does not cover sycophancy directly.