# R0057/2026-04-01/C021 — Claim Definition
## Claim as Received
AI safety researchers call the problem sycophancy while regulated industries call it automation bias, automation complacency, overtrust, overreliance, or acquiescence.
## Claim as Clarified
AI safety researchers call the problem 'sycophancy', while regulated industries call the same problem 'automation bias', 'automation complacency', 'overtrust', 'overreliance', or 'acquiescence'.
## BLUF
Confirmed. The vocabulary split is well documented. AI safety and ML research use 'sycophancy'; human factors engineering, aviation, and healthcare use 'automation bias', 'automation complacency', 'overtrust', and 'overreliance'. The terms describe overlapping phenomena viewed from different disciplinary perspectives.
## Scope
- Domain: AI sycophancy research
- Timeframe: Current (2024–2026)
- Testability: Verifiable against published research and public records
## Assessment Summary
Probability: Very likely (80–95%)
Confidence: High
Hypothesis outcome: H1 is supported based on available evidence.
[Full assessment in assessment.md.]
## Status
| Field | Value |
|---|---|
| Date created | 2026-04-01 |
| Date completed | 2026-04-01 |
| Researcher profile | Phillip Moore |
| Prompt version | Unified Research Methodology v1 |
| Revisit by | 2027-04-01 |
| Revisit trigger | If a shared vocabulary emerges that bridges these communities |