Were inclusion/exclusion criteria applied consistently? Yes. Sources were
included if they directly addressed fact-checking methodology, evidence
evaluation frameworks, or epistemological analysis of fact-checking.
Sources about AI model calibration or unrelated journalism topics were
excluded.
Were any borderline sources excluded that could change the finding?
Potentially. The Seeck et al. (2025) "Epistemological Framework" paper
(SRC-Q1-01) could be interpreted more generously as a nascent prescriptive
framework rather than a purely descriptive one. I classified it as
descriptive based on the available abstract and summary. Full-text review
might reveal more prescriptive elements. However, even if reclassified, a
single 2025 paper does not constitute an established framework comparable to
GRADE (developed over 20+ years with handbook, software tools, and global
adoption).
Were searches sufficient to find disconfirming evidence? Yes. Four
searches (2, 7, 8, 9) were specifically designed as falsification searches
to find a GRADE/IPCC/ICD 203-equivalent in fact-checking. None found one.
Were there obvious search strategies not attempted? Potential gaps:
- Non-English-language searches (especially for fact-checking frameworks in non-Western contexts).
- Deep search of computational fact-checking conference proceedings (FEVER workshop, CheckThat! Lab papers) for evidence-quality sub-tasks.
- Search of fact-checking practitioner organizations' internal documentation (Africa Check, Full Fact, Chequeado) for unpublished methodology guides.
Assessment: The English-language academic and practitioner literature was
well covered. The non-English and internal-documentation gaps are unlikely to
change the finding, given that the IFCN Code of Principles (which is
international in scope) does not include evidence quality standards.
Were supporting and contradicting sources evaluated with equal rigor? Yes.
Sources supporting the gap hypothesis (Uscinski & Butler, Steensen et al.)
were evaluated for bias, sample representativeness, and temporal relevance.
Sources potentially contradicting the gap (Seeck et al., Koliska & Roberts,
Amazeen) were given full consideration and their contributions explicitly
noted.
Was the Uscinski & Butler critique given appropriate weight? Potentially
over-weighted, given the authors' known ideological hostility to
fact-checking. Amazeen's rebuttal was included as a counterweight. The finding
does not depend on Uscinski & Butler's conclusions; multiple independent
sources confirm the gap.
Does the synthesis give fair weight to all evidence? Yes. The assessment
explicitly addresses three categories of counter-evidence (Seeck et al.,
Koliska & Roberts, Amazeen) and explains why each does not constitute a
formal evidence evaluation framework.
Is the researcher's declared bias toward structured methodology visible in
the analysis? RISK IDENTIFIED: The comparison table (GRADE/IPCC/ICD 203
vs. fact-checking) frames the analysis in terms of what fact-checking LACKS
relative to these frameworks. This framing inherently favors finding a gap.
An alternative framing would ask whether fact-checking's approach (process
transparency plus professional norms) is ADEQUATE for its purpose, which is a
different question from whether it is COMPARABLE to GRADE. The assessment
acknowledges this in the Gaps and Limitations section.
Can every factual claim in the assessment be traced to a specific source?
Yes. All claims are tagged with SRC IDs and linked to URLs.
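To make the traceability claim auditable, a minimal sketch in Python, assuming claims are tagged inline in the form [SRC-Q1-01] and a register maps each SRC ID to its URL (the file name and register entry below are hypothetical):

```python
import re
import sys

# Matches inline source tags of the form [SRC-Q1-01] (assumed tag format).
SRC_TAG = re.compile(r"\[(SRC-Q\d+-\d+)\]")

def audit_traceability(assessment_text: str, source_register: dict[str, str]) -> list[str]:
    """Return SRC IDs cited in the text but missing from the ID->URL register."""
    cited = set(SRC_TAG.findall(assessment_text))
    return sorted(cited - source_register.keys())

if __name__ == "__main__":
    text = open("assessment.md", encoding="utf-8").read()  # hypothetical file name
    register = {"SRC-Q1-01": "https://example.org/seeck-2025"}  # placeholder entry
    orphans = audit_traceability(text, register)
    if orphans:
        print("Untraceable claims:", ", ".join(orphans))
        sys.exit(1)
    print("All cited SRC IDs resolve to register entries.")
```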
Were any claims made without source backing? The comparison-table cells
marked "No" for fact-checking are supported by the ABSENCE of evidence across
12 searches, not by any single source explicitly stating the absence. This is
an inference from negative evidence, which is appropriate given the breadth of
the searches but should be flagged as such; a rough sensitivity sketch follows
below.
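As an illustration of how much weight 12 null searches can bear, a sketch under assumed (not measured) values for the prior and per-search sensitivity, treating searches as independent (overlapping queries would weaken the update):

```python
# Posterior probability that a GRADE-like framework exists despite n null searches.
# prior = P(framework exists); sensitivity = P(one search finds it | it exists).
# Both values below are illustrative assumptions, not estimates from the review.
def posterior_exists(prior: float, sensitivity: float, n_null_searches: int) -> float:
    miss_all = (1 - sensitivity) ** n_null_searches  # P(every search misses | exists)
    numerator = prior * miss_all
    return numerator / (numerator + (1 - prior))

# Generous 50% prior, modest 30% per-search sensitivity, 12 searches:
print(f"{posterior_exists(0.5, 0.3, 12):.4f}")  # -> 0.0137
```

Even under these deliberately generous numbers the posterior falls below 2%; the point is the shape of the update, not the specific values.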
Were URLs verified as accessible? Yes, at search time. Some may sit behind
paywalls (Taylor & Francis Online, SAGE, Springer).
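For future re-checks, a minimal sketch using only the standard library; the User-Agent string and example URLs are placeholders, and 401/403 responses are treated as paywall or bot-blocking signals rather than dead links:

```python
import urllib.error
import urllib.request

def check_url(url: str, timeout: float = 10.0) -> str:
    """Classify a source URL as 'ok', 'paywalled/blocked', or 'unreachable'."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "source-audit/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return "ok" if resp.status < 400 else "paywalled/blocked"
    except urllib.error.HTTPError as e:
        # Publisher sites often return 401/403 to scripts; that signals a
        # paywall or bot block, not a dead link.
        return "paywalled/blocked" if e.code in (401, 403) else "unreachable"
    except (urllib.error.URLError, TimeoutError):
        return "unreachable"

for url in ("https://www.tandfonline.com/", "https://example.org/"):  # placeholders
    print(url, "->", check_url(url))
```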