R0043/2026-04-01/Q001/SRC05
Multi-university position paper formally distinguishing overreliance, automation bias, and sycophancy
Source
| Field | Value |
| --- | --- |
| Title | Measuring and mitigating overreliance is necessary for building human-compatible AI |
| Publisher | arXiv (multi-institution) |
| Author(s) | Lujain Ibrahim, Katherine M. Collins, Sunnie S. Y. Kim, et al. |
| Date | September 2025 |
| URL | https://arxiv.org/html/2509.08010v1 |
| Type | Position paper / Research article |
Summary
| Dimension | Rating |
| --- | --- |
| Reliability | High |
| Relevance | High |
| Bias: Missing data | Low risk |
| Bias: Measurement | Low risk |
| Bias: Selective reporting | Low risk |
| Bias: Randomization | N/A -- not an RCT |
| Bias: Protocol deviation | N/A -- not an RCT |
| Bias: COI/Funding | Some concerns |
Rationale
| Dimension | Rationale |
| --- | --- |
| Reliability | Authors from Oxford, Cambridge, Princeton, Stanford, Harvard, U. Washington, and OpenAI; high institutional credibility |
| Relevance | Directly provides the formal terminological distinctions this query requires |
| Bias flags | OpenAI co-authorship introduces potential COI; however, the paper argues for more measurement of overreliance, which is consistent with OpenAI's stated safety goals |
Evidence
| Evidence ID | Summary |
| --- | --- |
| SRC05-E01 | Formal definitions distinguishing overreliance (behavior) from automation bias (cognition) from sycophancy (model property) |