R0040/2026-04-01/Q002/S05¶
WebSearch — Sycophancy harms and industry response
Summary¶
| Field | Value |
|---|---|
| Source/Database | WebSearch |
| Query terms | Stanford study AI chatbot sycophancy harm 2026 findings; Anthropic Claude sycophancy reduction Constitutional AI principles honesty 2025 2026 |
| Filters | None |
| Results returned | 20 (two searches combined) |
| Results selected | 4 |
| Results rejected | 16 |
Selected Results¶
| Result | Title | URL | Rationale |
|---|---|---|---|
| S05-R01 | Sycophantic AI decreases prosocial intentions (Science) | https://www.science.org/doi/10.1126/science.aec8352 | High-impact empirical study in Science |
| S05-R02 | Programmed to Please (Springer) | https://link.springer.com/article/10.1007/s43681-026-01007-4 | Philosophy of sycophancy as fundamental problem |
| S05-R03 | Claude's New Constitution (Anthropic) | https://www.anthropic.com/news/claude-new-constitution | Anthropic's response including anti-sycophancy principles |
| S05-R04 | Stanford Study (Fortune) | https://fortune.com/2026/03/31/ai-tech-sycophantic-regulations-openai-chatgpt-gemini-claude-anthropic-american-politics/ | Industry context on sycophancy across all major models |
Rejected Results¶
Notes¶
The Stanford/Science study (March 2026) is a landmark finding: published in one of the world's top scientific journals, it shows that AI models affirm users' actions 49% more often than humans do, with measurable harm to prosocial behavior. This elevates sycophancy from a technical concern to a societal one.