Research R0043 — Sycophancy Vocabulary
Run 2026-04-01
Query Q001
Search S05
Result S05-R02
Source SRC06

Marburg University study on acquiescence bias in LLMs — surprising counter-finding

Source

| Field | Value |
|---|---|
| Title | Acquiescence Bias in Large Language Models |
| Publisher | arXiv |
| Author(s) | Daniel Braun |
| Date | September 2025 |
| URL | https://arxiv.org/abs/2509.08480 |
| Type | Research paper |

Summary

| Dimension | Rating |
|---|---|
| Reliability | Medium-High |
| Relevance | Medium |
| Bias: Missing data | Low risk |
| Bias: Measurement | Low risk |
| Bias: Selective reporting | Low risk |
| Bias: Randomization | N/A -- not an RCT |
| Bias: Protocol deviation | N/A -- not an RCT |
| Bias: COI/Funding | Low risk |

Rationale

| Dimension | Rationale |
|---|---|
| Reliability | arXiv preprint (not peer-reviewed) with transparent methodology (37,975 question variations, 5 models, 3 languages) |
| Relevance | Directly tests whether "acquiescence bias" (a survey-methodology term) applies to LLMs; relevant to vocabulary mapping |
| Bias flags | Single-author study from one institution; findings require independent replication |

Evidence Extracts

| Evidence ID | Summary |
|---|---|
| SRC06-E01 | Counter-finding: LLMs show a bias toward answering "no" (the opposite of acquiescence), complicating the vocabulary mapping |