Research R0048 — Corporate AI Training
Run 2026-04-01
Query Q003
Search S02
Result S02-R03
Source SRC05

Giskard — Sycophancy in Large Language Models

Source

Field Value
Title Sycophancy in Large Language Models
Publisher Giskard
Author(s) Giskard
Date 2025-2026
URL https://www.giskard.ai/knowledge/when-your-ai-agent-tells-you-what-you-want-to-hear-understanding-sycophancy-in-llms
Type AI safety analysis

Summary

Dimension Rating
Reliability Medium-High
Relevance High
Bias: Missing data Low risk
Bias: Measurement N/A
Bias: Selective reporting Low risk
Bias: Randomization N/A — not an RCT
Bias: Protocol deviation N/A — not an RCT
Bias: COI/Funding Some concerns

Rationale

Dimension Rationale
Reliability Giskard is an AI testing and quality company with relevant technical expertise. The analysis cites Tsinghua's H-Neuron research.
Relevance Highly relevant — explicitly connects hallucination and sycophancy at the neuron level.
Bias flags Some COI: Giskard sells AI testing tools. However, the Tsinghua research it cites is independent.

Evidence Extracts

Evidence ID Summary
SRC05-E01 Hallucination and sycophancy are "the same behavior at the neuron level" — per the H-Neuron research from Tsinghua