R0054/2026-03-31/C003/SRC03
Research on semantic override hallucinations in LLM reasoning.
Source
| Field | Value |
|---|---|
| Title | When Models Ignore Definitions: Measuring Semantic Override Hallucinations in LLM Reasoning |
| Publisher | arXiv |
| Author(s) | Various |
| Date | 2026-02 |
| URL | https://arxiv.org/html/2602.17520 |
| Type | Research paper |
Summary
| Dimension | Rating |
|---|---|
| Reliability | High |
| Relevance | High |
| Bias: Missing data | Low risk |
| Bias: Measurement | Low risk |
| Bias: Selective reporting | Low risk |
| Bias: Randomization | N/A -- not an RCT |
| Bias: Protocol deviation | N/A -- not an RCT |
| Bias: COI/Funding | Low risk |
Rationale
| Dimension | Rationale |
|---|---|
| Reliability | Recent academic preprint with systematic experimental methodology. |
| Relevance | Directly demonstrates the mechanism of models ignoring explicit instructions and reverting to default behavior. |
| Bias flags | No significant concerns. Standard academic research. |
| Evidence ID | Summary |
|---|---|
| SRC03-E01 | Semantic override: models revert to pretrained defaults despite explicit redefinitions, while offering confident explanations |