R0044/2026-04-01/Q002/SRC01
Sharma et al. (2026) — Sycophantic AI decreases prosocial intentions and promotes dependence
Source
| Field | Value |
| --- | --- |
| Title | Sycophantic AI decreases prosocial intentions and promotes dependence |
| Publisher | Science (AAAS) |
| Author(s) | Sharma et al. (Stanford University) |
| Date | March 2026 |
| URL | https://www.science.org/doi/10.1126/science.aec8352 |
| Type | Research paper (peer-reviewed) |
Summary
| Dimension | Rating |
| --- | --- |
| Reliability | High |
| Relevance | High |
| Bias: Missing data | Low risk |
| Bias: Measurement | Low risk |
| Bias: Selective reporting | Low risk |
| Bias: Randomization | Low risk |
| Bias: Protocol deviation | Low risk |
| Bias: COI/Funding | Low risk |
Rationale
| Dimension | Rationale |
| --- | --- |
| Reliability | Published in Science, one of the highest-impact peer-reviewed journals. Preregistered experiments with 1,604 participants across 11 AI models. |
| Relevance | Directly measures the consequences of AI sycophancy on human judgment and behavior, the core of Q002. Provides the strongest experimental evidence for measurable harm from agreeable AI output. |
| Bias flags | Well-controlled experimental design with preregistered hypotheses. Low risk across all domains. |
| Evidence ID | Summary |
| --- | --- |
| SRC01-E01 | AI models affirm users 49% more than humans; a single interaction measurably reduces prosocial intentions |