Q001 — Assessment
Query
Has the fact-checking community developed any formal epistemological framework for evidence evaluation comparable to GRADE (certainty of evidence), IPCC (calibrated uncertainty language), or ICD 203 (analytical tradecraft standards)?
Hypotheses Tested
- H1 (Affirmative): Fact-checking has developed formal epistemological frameworks comparable to GRADE/IPCC/ICD 203.
- H2 (Negative): No such formal framework exists; fact-checking relies on ad-hoc, organization-specific rating scales.
- H3 (Nuanced): Partial frameworks exist (e.g., ClaimReview schema, IFCN code of principles) but they address process/transparency rather than epistemological evidence evaluation.
Finding
H3 is best supported by the evidence, though it shades strongly toward H2.
The fact-checking community has NOT developed any formal epistemological framework for evidence evaluation comparable to GRADE, IPCC calibrated uncertainty language, or ICD 203 analytical tradecraft standards. What exists instead is a set of process-transparency standards and organizational norms that address different concerns than evidence quality evaluation.
Confidence: HIGH
Using ICD 203 probability language: It is almost certain (95%+) that no formal evidence evaluation framework comparable to GRADE, IPCC, or ICD 203 exists in fact-checking.
This confidence level is justified by:
- Multiple targeted searches specifically designed to find such a framework returned no results (Searches 2, 7, 8, 9).
- Comprehensive academic surveys of fact-checking methodology (SRC-Q1-06, SRC-Q1-13, SRC-Q1-16) do not reference any such framework.
- The leading practitioner standard (IFCN Code of Principles, SRC-Q1-08) addresses process transparency, not evidence quality grading.
- Multiple independent academic papers note methodological gaps and inconsistencies that such a framework would address (SRC-Q1-02, SRC-Q1-03, SRC-Q1-05, SRC-Q1-14, SRC-Q1-15).
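The ICD 203 probability language used above can be made concrete. The sketch below hardcodes the seven calibrated terms and their percentage bands from the directive's expressions-of-likelihood table; the function and variable names are illustrative, not part of any fact-checking standard.

```python
# ICD 203 (2015) calibrated probability terms and their percentage bands.
# Band boundaries follow the directive's likelihood table; everything else
# (names, error handling) is an illustrative choice.
ICD203_BANDS = [
    (1, 5, "almost no chance"),
    (5, 20, "very unlikely"),
    (20, 45, "unlikely"),
    (45, 55, "roughly even chance"),
    (55, 80, "likely"),
    (80, 95, "very likely"),
    (95, 99, "almost certain"),
]

def icd203_term(probability_pct: float) -> str:
    """Map a probability estimate (in percent) to its ICD 203 term."""
    for low, high, term in ICD203_BANDS:
        if low <= probability_pct <= high:
            return term
    raise ValueError("ICD 203 assigns no term outside the 1-99% range")

print(icd203_term(97))  # → almost certain
```

This is exactly the kind of shared, codified vocabulary the finding says fact-checking lacks: any analyst mapping 97% gets the same term.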
Evidence Synthesis
What exists in fact-checking (partial frameworks)
- IFCN Code of Principles (SRC-Q1-08): Five process-transparency commitments. Addresses organizational behavior (nonpartisanship, source disclosure, funding transparency, methodology transparency, corrections policy). Does NOT address: how to weight evidence, how to express uncertainty, how to assess source reliability hierarchically, or how to identify and mitigate cognitive bias in analysis.
- ClaimReview Schema (SRC-Q1-10, SRC-Q1-11): Structured data standard for encoding fact-check verdicts. Captures claim, claimant, and verdict. Each organization supplies its own rating scale. No shared evidence quality vocabulary. No confidence/certainty field.
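The gap ClaimReview leaves is visible in the markup itself. The sketch below builds a minimal ClaimReview record as a Python dict; the property names (`@type`, `claimReviewed`, `reviewRating`, `alternateName`) come from the schema.org vocabulary, while the claim, organization, and verdict label are invented for illustration.

```python
import json

# A minimal schema.org ClaimReview record. Property names follow the
# ClaimReview vocabulary; the claim, organization, and verdict label
# are invented placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Example claim text",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,         # numeric value on the org's OWN scale
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False", # organization-specific verdict label
    },
}

# The schema carries the verdict but no shared evidence-quality or
# confidence vocabulary: such fields simply do not exist in the type.
rating = claim_review["reviewRating"]
assert "evidenceQuality" not in rating and "confidence" not in rating
print(json.dumps(claim_review, indent=2))
```

Two organizations can publish structurally identical records whose `ratingValue` scales mean entirely different things, which is precisely the interoperability-without-epistemology point above.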
- Shared practitioner epistemology (SRC-Q1-16): Koliska & Roberts (2024) found "isomorphic norms" across 40 organizations — shared belief in objective truth, transparent reproducible process, public service mission. These are INFORMAL institutional norms, not a CODIFIED framework with explicit criteria.
- Ad hoc convergence in visual verification (SRC-Q1-04): Grut (2026) documents Norwegian fact-checkers adopting OSINT and intelligence practices for visual verification, producing "substantiated verification" that goes beyond true/false. This is domain-specific, localized, and not codified as a transferable framework.
What is absent (the gap)
Compared to the three reference frameworks:
| Feature | GRADE | IPCC | ICD 203 | Fact-Checking |
|---|---|---|---|---|
| Hierarchical evidence quality scale | Yes (high/moderate/low/very low) | Yes (robust/medium/limited evidence) | Yes (source credibility requirements) | No |
| Calibrated confidence/uncertainty language | Implicit in quality levels | Yes (virtually certain to exceptionally unlikely) | Yes (probability language requirements) | No |
| Structured bias assessment | Yes (5 downgrade domains + 3 upgrade criteria) | Yes (agreement dimension) | Yes (assumption identification, alternative analysis) | No — IFCN requires nonpartisanship but no structured assessment |
| Source reliability tiering | Inherent in study design hierarchy | Yes (evidence type consideration) | Yes (source description requirements) | No — sources used ad hoc |
| Formal methodology document | Yes (GRADE Handbook) | Yes (Guidance Note for Lead Authors) | Yes (ICD-203 directive) | No — IFCN Code addresses process, not epistemology |
Automated fact-checking compounds the gap
The computational fact-checking pipeline (SRC-Q1-06, SRC-Q1-07, SRC-Q1-13) mirrors the human gap. The standard three-stage pipeline (claim detection, evidence retrieval, verdict prediction) does not include an evidence quality assessment stage. Vladika & Matthes (SRC-Q1-13) explicitly note that no existing datasets account for "differing levels and strength of evidence."
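The three-stage pipeline described above, and the stage it lacks, can be sketched as follows. Only the stage structure (claim detection → evidence retrieval → verdict prediction) comes from the surveyed literature; the function names and placeholder logic are ours.

```python
# Sketch of the standard automated fact-checking pipeline. All function
# bodies are trivial placeholders; only the three-stage structure reflects
# the pipeline described in the surveys cited above.

def detect_claims(text: str) -> list[str]:
    # Stage 1: identify check-worthy claims (placeholder: one per line).
    return [line.strip() for line in text.splitlines() if line.strip()]

def retrieve_evidence(claim: str) -> list[str]:
    # Stage 2: fetch candidate evidence (placeholder: canned snippet).
    return [f"snippet about: {claim}"]

def predict_verdict(claim: str, evidence: list[str]) -> str:
    # Stage 3: classify the claim against the evidence (placeholder).
    return "supported" if evidence else "not enough evidence"

def run_pipeline(text: str) -> dict[str, str]:
    # Note what is missing: no stage between retrieval and verdict grades
    # how strong or reliable the retrieved evidence actually is.
    return {c: predict_verdict(c, retrieve_evidence(c))
            for c in detect_claims(text)}

print(run_pipeline("Example claim one\nExample claim two"))
```

A GRADE-style system would insert an `assess_evidence_quality` stage between retrieval and verdict; the surveyed datasets give such a stage nothing to train on.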
Counter-evidence considered
- Seeck et al. (2025) (SRC-Q1-01) propose "An Epistemological Framework" for fact-checking. However, this is a DESCRIPTIVE analytical framework identifying three challenges (objectivism, truth regimes, causal relations) — it is not a PRESCRIPTIVE evidence evaluation methodology. It helps scholars analyze fact-checking's epistemological tensions; it does not tell practitioners how to grade evidence.
- Koliska & Roberts (2024) (SRC-Q1-16) document shared epistemological beliefs among fact-checkers. However, shared beliefs ≠ codified framework. The beliefs are tacit professional norms transmitted through institutional isomorphism (IFCN certification, Global Fact conferences), not an explicit methodology document.
- Amazeen (2015) (SRC-Q1-09) defends fact-checking methods as more rigorous than Uscinski & Butler claimed. However, her defense appeals to professional practice standards, not to a formal evidence quality framework.
Key Distinctions
JUDGMENT — The fact-checking community has developed:
- Process standards (IFCN Code) — addressing transparency and fairness
- Data standards (ClaimReview) — addressing interoperability of verdicts
- Shared professional norms — addressing institutional identity
It has NOT developed:
- Evidence quality standards — how to evaluate the strength/reliability of evidence supporting a verdict
- Uncertainty expression standards — how to communicate confidence in verdicts
- Analytical tradecraft standards — how to structure reasoning, identify assumptions, and consider alternatives
The gap is between WHAT fact-checkers do (transparent, reproducible process) and HOW they evaluate the evidence that informs their verdicts (ad hoc, unstated, organization-specific).
Gaps and Limitations
- Some sources were behind paywalls and could only be assessed through abstracts and summaries.
- The search was conducted in English only; frameworks in other languages may exist.
- Practitioner guides and internal training materials at specific organizations were not accessible.
- The comparison to GRADE/IPCC/ICD 203 sets a high bar; less formal but still structured approaches may exist below this threshold.