R0044/2026-03-29/Q001/SRC06
"Adjust for Trust: Mitigating Trust-Induced Inappropriate Reliance on AI Assistance" (2025)
Source
| Field | Value |
| --- | --- |
| Title | Adjust for Trust: Mitigating Trust-Induced Inappropriate Reliance on AI Assistance |
| Publisher | arXiv (preprint) |
| Author(s) | Not fully extracted |
| Date | February 2025 |
| URL | https://arxiv.org/html/2502.13321v2 |
| Type | Research paper (preprint) |
Summary
| Dimension | Rating |
| --- | --- |
| Reliability | Medium |
| Relevance | Medium-High |
| Bias: Missing data | Low risk |
| Bias: Measurement | Some concerns |
| Bias: Selective reporting | Low risk |
| Bias: Randomization | N/A (not an RCT) |
| Bias: Protocol deviation | N/A (not an RCT) |
| Bias: COI/Funding | Low risk |
Rationale
| Dimension | Rationale |
| --- | --- |
| Reliability | Preprint, not yet peer-reviewed. However, the methodology appears sound, with empirical results (a 13-31% reduction in under-reliance and a 10-23% reduction in over-reliance). |
| Relevance | Directly proposes system-side interventions in which the AI adapts its output based on detected user trust levels. This is the most concrete system-side design proposal found in this research, though it is academic rather than regulatory. |
| Bias flags | The trust heuristics show only moderate correlation (0.51) with actual user trust, a significant limitation acknowledged by the authors. |
| Evidence ID | Summary |
| --- | --- |
| SRC06-E01 | Proposes trust-adaptive AI that provides counter-explanations during moments of high trust, reducing over-reliance by 10-23%. |