R0051/2026-03-31/Q001/SRC04/E01
Fact-checkers lack structured methodologies for expressing confidence levels.
URL: https://arxiv.org/html/2502.09083v1
Extract
Warren et al. (2025) report that fact-checkers lack structured methodologies for expressing confidence levels. One participant expressed confusion about confidence scoring: "What does 65 versus 74 confidence mean?" This suggests that the fact-checkers studied neither use nor understand calibrated confidence language, a core component of frameworks such as ICD 203 (a seven-point probability scale) and the IPCC (calibrated uncertainty language).
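To make concrete what a calibrated scale would resolve here, the sketch below maps numeric confidence scores onto ICD 203's seven probability bands. Under that scale, the participant's 65 and 74 both fall into the same band, "likely / probable", which is exactly the kind of shared vocabulary the quoted confusion points to. The band boundaries follow ICD 203; the function name and code structure are illustrative assumptions, not anything described by Warren et al.

```python
# ICD 203 seven-point probability scale: (lower bound, upper bound, expression).
# Boundaries per the standard; bands below 1% and above 99% are out of scope.
ICD_203_BANDS = [
    (0.01, 0.05, "almost no chance / remote"),
    (0.05, 0.20, "very unlikely / highly improbable"),
    (0.20, 0.45, "unlikely / improbable"),
    (0.45, 0.55, "roughly even chance / roughly even odds"),
    (0.55, 0.80, "likely / probable"),
    (0.80, 0.95, "very likely / highly probable"),
    (0.95, 0.99, "almost certain / nearly certain"),
]

def icd203_term(confidence_percent: float) -> str:
    """Map a 0-100 confidence score to its ICD 203 probability expression.

    Illustrative helper (not from the source paper): scores on shared
    band boundaries resolve to the lower band, first match wins.
    """
    p = confidence_percent / 100.0
    for low, high, term in ICD_203_BANDS:
        if low <= p <= high:
            return term
    return "outside the calibrated scale (ICD 203 covers 1-99%)"

# The two scores from the participant's question land in the same band:
print(icd203_term(65))  # likely / probable
print(icd203_term(74))  # likely / probable
```

The point of the sketch is the one the extract implies: a calibrated vocabulary collapses spurious numeric precision (65 vs. 74) into a small set of terms with agreed probability ranges, which is precisely the methodology the participants lacked.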
The authors identify three essential explanation requirements: the "why" (verdict justification), the "how" (reasoning process), and the "who" (training data origins). They emphasize that explanations must enable "replicability": readers should be able to reconstruct a fact-check independently from the cited sources and a transparent methodology.
Relevance to Hypotheses
| Hypothesis | Relationship | Rationale |
|---|---|---|
| H1 | Strongly contradicts | Direct practitioner testimony that calibrated confidence methodology does not exist in practice |
| H2 | Supports | Practitioners have implicit evidence evaluation practices but no formal confidence framework |
| H3 | Contradicts | Requirements for replicability show epistemological awareness even without formal frameworks |
Context
This is particularly diagnostic evidence: confusion about what a numerical confidence score means indicates that these fact-checkers have not been trained in, or even exposed to, calibrated uncertainty language, a fundamental component of GRADE, the IPCC guidance, and ICD 203.