Were inclusion/exclusion criteria applied consistently? Yes. Papers were
included if they explicitly or implicitly addressed the absence, limitations,
or challenges of fact-checking methodology/epistemology. Papers about
technical fact-checking systems, audience effects, or unrelated topics were
excluded.
Were any borderline sources excluded that could change the finding? The
"What Is the Problem with Misinformation?" paper (SRC-Q3-05) was included as
counter-evidence. If more papers framing fact-checking as a sociotechnical
practice (rather than as an epistemological endeavor) had been found, they
might further weaken the gap thesis. This area was not searched in depth.
Were searches sufficient to find disconfirming evidence? Partially. The
sociotechnical counter-argument (SRC-Q3-05) was found organically. A
dedicated search for papers arguing that fact-checking's methodology is
ADEQUATE or that formal frameworks are UNNECESSARY was not conducted.
Were there obvious search strategies not attempted?
A search for "fact-checking methodology adequate" or "fact-checking
epistemology sufficient" would test for papers that explicitly defend the
status quo as appropriate.
Searches in communication studies databases (e.g., Communication Abstracts)
might find additional relevant papers.
Searches for specific fact-checking organization methodology publications
(e.g., Full Fact's methodology documentation, Africa Check's approach)
might reveal more codified frameworks at the organizational level.
Assessment: The search was thorough for gap-identifying papers but less
thorough for gap-denying papers. This asymmetry favors the researcher's
hypothesis and is noted as a limitation.
Were supporting and contradicting sources evaluated with equal rigor? Yes.
Counter-evidence (SRC-Q3-05 sociotechnical framing, SRC-Q3-07 Amazeen's
defense, SRC-Q3-11 practitioner confidence) was given explicit treatment in
the assessment. The counter-evidence section is clearly demarcated.
Was the Uscinski & Butler paper given appropriate weight? RISK —
Uscinski & Butler is given prominence as the "earliest" gap identification,
but its hostile framing and challenged methodology (per Amazeen) mean it
should not be treated as dispositive. The assessment uses it as one data
point among many rather than as the foundation of the finding.
Does the synthesis give fair weight to all evidence? Mostly yes. The
four-category organization (philosophy, journalism studies, computational
research, emerging gap-filling) provides balanced coverage. The counter-
evidence section is substantive.
Is the researcher's declared bias visible in the analysis? RISK — The
assessment characterizes papers as "implicitly documenting
the gap" when they document variation or inconsistency without explicitly
calling it a gap. This interpretive move could be seen as reading the
researcher's thesis into neutral findings. The assessment mitigates this by
distinguishing between EXPLICIT and IMPLICIT identification and counting
them separately.
Is the "dispersed documentation" finding itself biased? RISK — Calling
the documentation "dispersed" frames it as a unified phenomenon seen from
different angles, which presupposes that a single gap exists. An alternative
framing is that different papers address different issues (epistemological
legitimacy, reproducibility, standardization, inter-rater reliability) that
only APPEAR to be a single gap when viewed through the researcher's lens.
Can every factual claim in the assessment be traced to a specific source?
Yes. All claims reference SRC IDs with URLs.
Were any claims made without source backing? The claim that papers from
different disciplines "do not cross-reference each other comprehensively" is
an inference from the search results rather than a verified fact — it would
require full-text review of all cited papers to confirm cross-citation
patterns.
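Confirming that inference would require extracting each paper's reference list and comparing across disciplines. A minimal sketch of that check, assuming citation lists had been extracted (hypothetically, from full texts or an index such as OpenAlex — neither was done here), with made-up paper IDs:

```python
from itertools import combinations

# Hypothetical data: discipline corpus -> set of paper IDs it cites.
# Real IDs would come from extracted reference lists, not this audit.
cited_by_discipline = {
    "philosophy": {"P1", "P2", "P3"},
    "journalism": {"P3", "P4"},
    "computational": {"P5"},
}

def cross_reference_counts(cited: dict[str, set[str]]) -> dict[tuple[str, str], int]:
    """Count citations shared between each pair of discipline corpora.

    A count of 0 for most pairs would support the "do not cross-reference
    each other comprehensively" inference; high counts would undercut it.
    """
    return {
        (a, b): len(cited[a] & cited[b])
        for a, b in combinations(sorted(cited), 2)
    }

print(cross_reference_counts(cited_by_discipline))
```

With the toy data above, only the journalism/philosophy pair shares a citation (P3); the computational corpus shares none, which is the pattern the inference predicts.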
Were URLs verified as accessible? URLs were verified as reachable at search
time. Several Tandfonline and SAGE articles may be behind paywalls, so
full-text accessibility was not confirmed.
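A re-verification pass could be scripted. The sketch below (assuming sources are tracked as SRC-ID → URL pairs; the label scheme is an illustrative assumption, not part of the audit) requests each URL and flags paywall-indicative status codes:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def classify_status(code: int) -> str:
    """Map an HTTP status code to a coarse audit label."""
    if 200 <= code < 300:
        return "accessible"
    if code in (401, 402, 403):
        return "possible paywall"  # auth or payment required
    if code == 404:
        return "broken link"
    return "check manually"

def audit_urls(sources: dict[str, str], timeout: float = 10.0) -> dict[str, str]:
    """Return an audit label per SRC ID; network failures are recorded too."""
    results = {}
    for src_id, url in sources.items():
        req = Request(url, headers={"User-Agent": "link-audit/0.1"})
        try:
            with urlopen(req, timeout=timeout) as resp:
                results[src_id] = classify_status(resp.status)
        except HTTPError as e:
            results[src_id] = classify_status(e.code)
        except URLError:
            results[src_id] = "unreachable"
    return results
```

Note that a 200 response only shows the landing page loads; paywalled Tandfonline and SAGE abstracts typically return 200, so the "accessible" label still overstates full-text availability.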