SRC05-E01 — UX Research on Sycophancy
Extract
NN/g describes sycophancy as "instances in which an AI model adapts responses to align with the user's view, even if the view is not objectively true." Key finding: "humans prefer sycophantic responses while training or interacting with models. Sometimes, users even prefer false sycophantic responses to truthful responses." Practical recommendations: "reset conversations and sessions often," "do not express strong opinions or concrete positions during conversations with language models to avoid biasing the model's responses," "do not rely exclusively on language models for fact-finding," and "treat AI bots as good starting points and keyword providers, but don't trust them to do all your information seeking."
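NN/g's "reset conversations and sessions often" advice can be made concrete: send each independent fact-finding question as a fresh, history-free request rather than appending it to a growing transcript, so opinions expressed earlier in the conversation cannot bias later answers. A minimal sketch, where `ask` is a hypothetical stand-in for any chat-completion API call (not a real library function):

```python
# Sketch: fresh-session querying to limit sycophantic drift.
# `ask` is a hypothetical placeholder for a chat-completion API call.

def ask(messages):
    # Placeholder model: real code would call an API here. The model
    # "sees" every message passed in, so a long history can bias it.
    return f"(answer conditioned on {len(messages)} prior messages)"

def biased_session(questions):
    """Accumulates history: each answer is conditioned on all earlier turns."""
    history, answers = [], []
    for q in questions:
        history.append({"role": "user", "content": q})
        a = ask(history)
        history.append({"role": "assistant", "content": a})
        answers.append(a)
    return answers

def fresh_sessions(questions):
    """Resets per question: each answer sees only the current question."""
    return [ask([{"role": "user", "content": q}]) for q in questions]

questions = ["Is X true?", "I strongly believe Y. Is Y true?", "Is Z true?"]
print(biased_session(questions)[-1])  # conditioned on 5 prior messages
print(fresh_sessions(questions)[-1])  # conditioned on 1 message
```

The same structure pairs naturally with NN/g's neutral-phrasing advice: keep each user message a plain question rather than a stated position.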
Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Contradicts — this is UX research guidance, not corporate training content | Moderate |
| H2 | Supports — practical mitigation advice exists but appears in UX research, not training | Strong |
| H3 | Supports — actionable guidance exists in specialist literature but not in standard training | Strong |
Context
NN/g's recommendations are notable for being specific and actionable: exactly the kind of guidance that could appear in corporate training but does not. The advice not to express strong opinions, so as to avoid biasing the model's responses, is counterintuitive and could genuinely help users if it were included in training.
Notes
The recommendations to "reset conversations often" and to avoid expressing strong opinions are practical anti-sycophancy techniques that none of the corporate training materials examined include. This is a gap between what is known about mitigation and what is actually taught to employees.