Q001 — Search Log
Query
Has the fact-checking community developed any formal epistemological framework for evidence evaluation comparable to GRADE (certainty of evidence), IPCC (calibrated uncertainty language), or ICD 203 (analytical tradecraft standards)?
Search Executions
Search 1
- Terms: fact-checking epistemological framework evidence evaluation methodology academic literature
- Rationale: Broad sweep for any formal epistemological framework in fact-checking academic literature.
- Results returned: 10
- Selected:
- SRC-Q1-01: Seeck et al. (2025), "Fact-Checking in Journalism: An Epistemological Framework" — Tandfonline — Directly proposes an epistemological framework for fact-checking.
- SRC-Q1-02: Uscinski & Butler (2013), "The Epistemology of Fact Checking" — PhilPapers — Foundational critique of fact-checking epistemology.
- SRC-Q1-03: Steensen, Kalsnes & Westlund (2024), "The limits of live fact-checking: Epistemological consequences" — SAGE — Examines epistemological consequences of live fact-checking.
- SRC-Q1-04: Grut (2026), "Convergent Epistemic Practices in Visual Fact-Checking" — Tandfonline — Examines transfer of external epistemic practices into journalism.
- SRC-Q1-05: Lee et al. (2023), "'Fact-checking' fact checkers: A data-driven approach" — Harvard Misinformation Review — Empirical comparison of fact-checker methodology and agreement.
- Rejected:
- Trust but verify? (Oxford Academic) — About fictional entertainment, not fact-checking evidence evaluation.
- "The epistemic status of reproducibility" (philsci-archive) — Picked up in Search 5 instead.
Search 2
- Terms: fact-checking formal evidence grading system comparable GRADE IPCC certainty of evidence
- Rationale: Directly search for GRADE-comparable systems in fact-checking. Designed to find confirming evidence for H1 (that such a system exists).
- Results returned: 10
- Selected: None specific to fact-checking — all results were about GRADE and IPCC themselves in their home domains (medicine, climate science).
- Rejected: All 10 results — they describe GRADE and IPCC in their own domains (CDC, IPCC, Springer, PubMed, ScienceDirect) but none describe an equivalent system adopted by fact-checking.
- Finding: JUDGMENT — The absence of any fact-checking-specific results for this targeted query is itself informative. No GRADE-comparable evidence grading system appears to exist in fact-checking.
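For context on the benchmark this search targeted: GRADE rates a body of evidence at one of four certainty levels and downgrades it one level per serious concern from a fixed list. A minimal Python sketch of that logic, assuming a simplified one-level-per-concern rule (the level names and downgrading factors follow the GRADE handbook; the function itself is an illustration, not an official GRADE algorithm):

```python
# GRADE's four certainty levels, lowest to highest.
GRADE_LEVELS = ["very low", "low", "moderate", "high"]

# GRADE's five standard reasons to downgrade certainty in a body of evidence.
DOWNGRADE_FACTORS = {"risk_of_bias", "inconsistency", "indirectness",
                     "imprecision", "publication_bias"}

def grade_certainty(start_level: str, concerns: set[str]) -> str:
    """Drop one level per recognized serious concern, bottoming out at 'very low'."""
    idx = GRADE_LEVELS.index(start_level)
    idx -= len(concerns & DOWNGRADE_FACTORS)
    return GRADE_LEVELS[max(idx, 0)]
```

For example, `grade_certainty("high", {"imprecision"})` yields "moderate". No comparable level-assignment procedure surfaced for fact-checking in this search.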
Search 3
- Terms: computational fact-checking structured evidence assessment pipeline NLP claim verification methodology
- Rationale: Search computational/automated fact-checking for formal evidence evaluation frameworks that might exist in the technical domain.
- Results returned: 10
- Selected:
- SRC-Q1-06: Guo, Schlichtkrull & Vlachos (2022), "A Survey on Automated Fact-Checking" — TACL/MIT Press — Comprehensive survey of automated fact-checking pipeline.
- SRC-Q1-07: CLEF-2025 CheckThat! Lab overview — Springer — Latest shared task on fact-checking verification pipeline.
- Rejected:
- GitHub Cartus repository — resource list, not primary research.
- Step-by-Step Fact Verification for Medical Claims — domain-specific medical claims, not general fact-checking framework.
- ClaimCheck, FIRE, "The Data Says Otherwise" — individual system papers, not frameworks for evidence quality evaluation.
Search 4
- Terms: IFCN code of principles fact-checking standards evidence quality assessment structured methodology
- Rationale: Search for the primary practitioner standard (IFCN) to assess whether it constitutes a formal evidence evaluation framework.
- Results returned: 10
- Selected:
- SRC-Q1-08: IFCN Code of Principles — Poynter — The primary practitioner standard for fact-checking.
- Rejected:
- FactCheckNI, Factcheck.ge, accountablejournalism.org — restatements of the same IFCN code on individual org sites.
- RAND listing — catalog entry, not analysis.
- Compliance study (Spain) — about compliance with IFCN, not about evidence evaluation.
Search 5
- Terms: Uscinski Butler "epistemology of fact checking" 2013 journal arguments
- Rationale: Deep dive into the foundational epistemology critique to assess whether it identifies the framework gap.
- Results returned: 10
- Selected:
- SRC-Q1-09: Amazeen (2015), "Revisiting the Epistemology of Fact-Checking" — Tandfonline — Rebuttal to Uscinski & Butler.
- Rejected:
- Uscinski rejoinder (2015) — already captured through SRC-Q1-02 exchange.
- Ballotpedia listing — aggregator, not primary source.
- SCIRP reference, academia.edu, rydenbutler.org — mirrors of already captured content.
Search 6
- Terms: ClaimReview schema.org fact-checking structured data evidence quality rating scale
- Rationale: Assess whether ClaimReview schema includes evidence quality rating comparable to GRADE.
- Results returned: 10
- Selected:
- SRC-Q1-10: Schema.org ClaimReview type — Schema.org — The structured data standard for fact-checks.
- SRC-Q1-11: Google Fact Check markup documentation — Google — Implementation guide showing what ClaimReview captures.
- Rejected:
- HillWebCreations, Rankmath, Schemantra — SEO implementation guides, not analytical.
- DataCommons FAQ/Download — data access, not methodology.
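To make the assessment concrete, a minimal ClaimReview object shows what the schema does and does not capture. Property names follow schema.org; the URL, values, and organization below are invented placeholders:

```python
# Minimal ClaimReview markup (the schema.org type used by fact-checkers,
# per SRC-Q1-10/11), expressed as a Python dict mirroring the JSON-LD.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/example-claim",  # placeholder
    "claimReviewed": "Example claim text being checked",
    "datePublished": "2024-01-01",
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,                 # position on the outlet's own scale
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly False",  # free-text verdict label
    },
}

# ClaimReview captures the claim, verdict label, and scale position, but
# defines no property for evidence strength, certainty, or confidence.
```

The verdict scale (`bestRating`/`worstRating`) is whatever each outlet chooses, which is why the schema standardizes publication markup rather than evidence evaluation.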
Search 7
- Terms: ICD 203 analytic tradecraft standards intelligence community structured analysis fact-checking comparison
- Rationale: Establish the ICD 203 benchmark and search for any fact-checking adoption or comparison.
- Results returned: 10
- Selected:
- SRC-Q1-12: ODNI ICD-203 document — DNI — The reference standard for comparison.
- Rejected:
- LinkedIn article, JCSM paper, Army University Press, NIU paper, DoD IG report — all about ICD 203 in intelligence community context, none compare to fact-checking.
- Finding: JUDGMENT — No search result connects ICD 203 tradecraft standards to journalistic fact-checking, reinforcing that fact-checking has not adopted intelligence community evidence evaluation standards.
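The ICD 203 benchmark is concrete: the directive binds analysts to a fixed seven-term probability lexicon. A sketch of that lexicon as a lookup (the ranges follow the published directive; the function is an illustration of the mapping, not part of the directive):

```python
# ICD 203's standard probability lexicon: seven terms, each tied to a
# fixed probability range (shown here as fractions of 1).
ICD203_TERMS = [
    ((0.01, 0.05), "almost no chance"),
    ((0.05, 0.20), "very unlikely"),
    ((0.20, 0.45), "unlikely"),
    ((0.45, 0.55), "roughly even chance"),
    ((0.55, 0.80), "likely"),
    ((0.80, 0.95), "very likely"),
    ((0.95, 0.99), "almost certain"),
]

def icd203_term(p: float) -> str:
    """Return the ICD 203 term whose range contains probability p."""
    for (lo, hi), term in ICD203_TERMS:
        if lo <= p <= hi:
            return term
    raise ValueError(f"p={p} outside the 1%-99% lexicon range")
```

For example, a 60% judgment must be expressed as "likely". Nothing in the search results suggests any fact-checking organization constrains its verdict language this way.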
Search 8
- Terms: fact-checking "calibrated language" OR "calibrated confidence" OR "calibrated uncertainty" methodology standardization
- Rationale: Search for any adoption of calibrated uncertainty language (IPCC-style) in fact-checking.
- Results returned: 10
- Selected:
- SRC-Q1-13: Vladika & Matthes (2023), "Scientific Fact-Checking: A Survey of Resources and Approaches" — arXiv/ACL — Survey noting that no datasets account for "differing levels and strength of evidence."
- Rejected:
- LLM calibration papers (Nature, arXiv 2509, 2601, 2505, 2502) — about AI model calibration, not human fact-checker methodology.
- "Perils and promises" (PMC) — about LLM use in fact-checking, not human methodology.
- Finding: JUDGMENT — Calibrated uncertainty language has not been adopted in fact-checking. The results that appeared were all about calibrating AI models, not human fact-checker epistemology.
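What "calibrated uncertainty language" means in the IPCC benchmark can also be made concrete: the AR5 guidance maps probability bands to fixed likelihood terms. A sketch, with band values following the published guidance (the lookup function itself is illustrative, and the bands are nested, so the most specific applicable term wins):

```python
# IPCC AR5 calibrated likelihood language: fixed terms tied to
# probability bands, ordered most specific first because bands nest
# (e.g. "virtually certain" lies inside "very likely").
IPCC_LIKELIHOOD = [
    ("virtually certain",      0.99, 1.00),
    ("exceptionally unlikely", 0.00, 0.01),
    ("very likely",            0.90, 1.00),
    ("very unlikely",          0.00, 0.10),
    ("likely",                 0.66, 1.00),
    ("unlikely",               0.00, 0.33),
    ("about as likely as not", 0.33, 0.66),
]

def ipcc_term(p: float) -> str:
    """Return the most specific IPCC likelihood term covering p."""
    for term, lo, hi in IPCC_LIKELIHOOD:
        if lo <= p <= hi:
            return term
    return "about as likely as not"
```

Under this scheme a 70% assessment is "likely" and a 95% assessment is "very likely"; the search found no fact-checking analogue of such a term-to-probability contract.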
Search 9
- Terms: fact-checking formal structured evidence grading scale similar to medical GRADE system journalism
- Rationale: Final falsification search — explicitly looking for a GRADE-equivalent in journalism.
- Results returned: 10
- Selected: None new (Seeck et al. already captured).
- Rejected: All GRADE results were about GRADE in healthcare. The one fact-checking result (Cross-checking journalistic fact-checkers, PLOS ONE) was about inter-rater agreement, not a structured evidence grading system.
- Finding: JUDGMENT — This targeted search confirms no GRADE-equivalent exists in fact-checking.
Search 10
- Terms: fact-checking rating scale inconsistency PolitiFact comparison disagreement between fact checkers methodology
- Rationale: Look for evidence that fact-checker rating inconsistency has prompted development of standardized evidence frameworks.
- Results returned: 10
- Selected:
- SRC-Q1-14: Lim (2018), "Checking how fact-checkers check" — SAGE — Demonstrates inter-rater disagreement.
- SRC-Q1-15: Cross-checking journalistic fact-checkers (2023), PLOS ONE — PMC — Quantifies rating scale disagreement.
- Rejected:
- Penn State press releases — summaries of the Lee et al. paper already captured.
- Factually.co — informal comparison site, not academic.
- ResearchGate listing for Fact-Checking Polarized Politics — could not access full text.
Search 11
- Terms: "Epistemology of Fact Checking" examination practices beliefs fact checkers world Digital Journalism 2024
- Rationale: Deep dive into practitioner epistemology as expressed by fact-checkers themselves.
- Results returned: 10
- Selected:
- SRC-Q1-16: Koliska & Roberts (2024), "Epistemology of Fact Checking: An Examination of Practices and Beliefs" — Tandfonline — Survey of 40 fact-checking organizations across 50+ countries.
- Rejected:
- IAPMR research round-up — summary of others' work.
- Other results already captured in previous searches.
Search 12
- Terms: Duke Reporters Lab fact-checking census global database rating scales methodology comparison
- Rationale: Check the largest fact-checking database for standardized evidence methodology.
- Results returned: 10
- Selected:
- SRC-Q1-17: Duke Reporters' Lab Fact-Checking Census — Reporters' Lab — Global database of 417+ fact-checkers showing diversity of rating scales.
- Rejected:
- Ballotpedia, Sanford School press releases — secondary sources.
- Poynter 2024 article — reporting on census, not methodology.
Search Completeness Assessment
- Total unique searches executed: 12
- Total results evaluated: ~120
- Sources selected: 17
- Sources rejected with rationale: ~103
- Falsification searches conducted: Searches 2, 7, 8, 9 were specifically designed to find evidence that a formal framework exists (H1). All returned negative results for the fact-checking domain.