
Research R0053 — Prompt Claims
Run 2026-03-31-02
Claim C003
Search S02
Result S02-R02
Source SRC01

Sharma et al. — foundational paper on LLM sycophancy (ICLR 2024)

Source

Title: Towards Understanding Sycophancy in Language Models
Publisher: ICLR 2024
Author(s): Mrinank Sharma et al.
Date: 2023 (updated May 2025)
URL: https://arxiv.org/abs/2310.13548
Type: Research paper

Summary

Reliability: High
Relevance: High
Bias (missing data): Low risk
Bias (measurement): Low risk
Bias (selective reporting): Low risk
Bias (randomization): N/A, not an RCT
Bias (protocol deviation): N/A, not an RCT
Bias (COI/funding): Some concerns

Rationale

Reliability: Published at ICLR 2024, a top ML venue; a multi-author study with rigorous methodology.
Relevance: Directly studies the sycophancy phenomenon described in the claim.
Bias flags: Some conflict-of-interest concerns: the authors are affiliated with Anthropic, which has a commercial interest in understanding sycophancy to improve its products.

Evidence Extracts

SRC01-E01: Systematic sycophancy across five AI models and four tasks.