# R0050/2026-03-31-02/Q001 — Self-Audit

## ROBIS 4-Domain Audit (Extended with Source-Back Verification)

### Domain 1: Eligibility Criteria
Rating: Low risk
| Criterion | Assessment |
|---|---|
| Were evidence types defined before searching? | Yes — four specific structural features defined by the query |
| Did criteria shift after seeing results? | No — the four features remained constant throughout |
| Were exclusion criteria applied consistently? | Yes — same standard applied to all seven frameworks |
Notes: The query provided unusually clear eligibility criteria (four named structural features), reducing the risk of post-hoc criterion adjustment.
### Domain 2: Search Comprehensiveness
Rating: Low risk
| Criterion | Assessment |
|---|---|
| Multiple search strategies used | Yes — three search rounds covering all seven named frameworks plus academic literature |
| Searches designed to test each hypothesis | Yes — S03 specifically targeted evidence that could support H3 or contradict H2 |
| All results dispositioned | Yes — 80 results returned, 12 selected, 68 rejected with rationale |
| Source diversity achieved | Yes — primary methodology documents, professional standards, investigation handbooks, and academic research |
Notes: All seven named frameworks were examined through their primary methodology documents. The academic search (S03) added independent validation. WebFetch was used for detailed content extraction from NewsGuard and PolitiFact.
### Domain 3: Evaluation Consistency
Rating: Low risk
| Criterion | Assessment |
|---|---|
| All sources scored using same framework | Yes — identical scorecard format applied to all eight sources |
| Evidence typed consistently | Yes — Factual vs. Analytical distinction applied consistently |
| ACH matrix applied | Yes — all evidence evaluated against all three hypotheses |
| Diagnosticity analysis performed | Yes — most and least diagnostic evidence identified |
Notes: Consistent application was aided by the query's clear four-feature framework.
### Domain 4: Synthesis Fairness
Rating: Low risk
| Criterion | Assessment |
|---|---|
| All hypotheses given fair hearing | Yes — searched specifically for evidence supporting H1 (all features present) and H3 (no features) |
| Contradictory evidence surfaced | Yes — Verification Handbook's deliberate rejection of standardization was surfaced as important context |
| Confidence calibrated to evidence | Yes — high confidence supported by 8 sources with high agreement |
| Gaps acknowledged | Yes — four gaps identified with impact assessment |
Notes: The most important fairness consideration was surfacing the Verification Handbook's explicit rejection of standardization — this provides context that the absence of formal features may be partly a deliberate professional choice, not purely a gap.
### Domain 5: Source-Back Verification
Rating: Low risk
| Source | Claim in Assessment | Source Actually Says | Match? |
|---|---|---|---|
| SRC01 | IFCN requires "same high standards of evidence" without defining them | Principle 1 uses exactly this phrase | Yes |
| SRC02 | Truth-O-Meter has six ratings from True to Pants on Fire | Methodology page lists exactly six ratings | Yes |
| SRC03 | Nine criteria, 100-point scale, five tiers | Rating page confirms nine pass/fail criteria totaling 100 points and five tier categories | Yes |
| SRC05 | Requires cross-referencing with "at least 3 sources" | Yemen methodology page states this requirement | Yes |
| SRC07 | Explicitly rejects "one-size-fits-all" approach | Handbook uses this exact phrase | Yes |
Discrepancies found: 0
Corrections applied: None needed
Unresolved flags: None
Notes: Source-back verification confirmed accurate representation of all key claims. The direct use of primary methodology documents (rather than secondary summaries) reduced the risk of mischaracterization.
## Overall Assessment
Overall risk of bias: Low risk
The research process was well-constrained by the query's four specific structural features, which provided clear eligibility criteria. All seven named frameworks were examined through primary sources. The finding (partial presence of individual elements, absence of all four features in any single framework) is well-supported by convergent evidence from independent sources.
## Researcher Bias Check
- Confirmation bias risk: Low. The query framing could bias the review toward finding the four features absent, confirming the apparent hypothesis behind the query. Mitigated by actively searching for any framework that might contain these features, and by noting NewsGuard's partial match.
- Anchoring bias risk: Low. The intelligence/scientific framework comparison embedded in the query could anchor expectations. Mitigated by evaluating each journalism framework on its own terms before comparing.
- Availability bias risk: Low. The seven named frameworks are well-known, reducing risk of overlooking major alternatives. The gap analysis identifies potentially missing frameworks (Washington Post Fact Checker, Full Fact).