R0050/2026-03-31-02/Q002 — Query Definition
Query as Received
Beyond intelligence analysis and science, which other disciplines have formal truth-seeking methodologies that include structured evidence evaluation? Search across: legal standards of proof, auditing standards (PCAOB, GAAS), epidemiology (Bradford Hill criteria), medical diagnosis (OCEBM, CASP), engineering safety analysis (FMEA, FTA), historical source criticism, and information literacy frameworks (SIFT, CRAAP). For each, identify whether it contributes concepts not already captured by ICD 203, GRADE, PRISMA, Cochrane, IPCC, Chamberlin/Platt, ROBIS, or NAS.
Query as Clarified
This is an open-ended survey query with two components:
- Identification: For each named discipline/framework, does it include formal, structured evidence evaluation?
- Novelty assessment: Does each framework contribute concepts not already present in the eight intelligence/scientific reference frameworks listed (ICD 203, GRADE, PRISMA, Cochrane, IPCC, Chamberlin/Platt, ROBIS, NAS)?
The query assumes knowledge of the eight reference frameworks. Sub-questions:
- What does each discipline's evidence evaluation look like structurally?
- What unique concepts does it contribute?
- How does it compare to the reference set?
Embedded assumption: The query assumes the eight listed frameworks represent a comprehensive baseline. This is treated as a working assumption rather than tested.
BLUF
All seven named disciplines have formal truth-seeking methodologies with structured evidence evaluation. Five contribute concepts not already captured by the eight reference frameworks: (1) Legal standards contribute a calibrated hierarchy of proof tied to consequence severity; (2) Auditing contributes the sufficiency-appropriateness dual-axis evaluation and an explicit evidence reliability hierarchy; (3) FMEA/FTA contribute the Risk Priority Number (probability × severity × detection) and fault-tree logic; (4) Historical source criticism contributes external/internal criticism separation and the seven-criteria Scandinavian framework; (5) SIFT contributes lateral reading as a verification strategy. Bradford Hill, OCEBM, and CASP contribute concepts already well-captured by GRADE and Cochrane.
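The Risk Priority Number mentioned above is plain arithmetic over three ordinal ratings. A minimal Python sketch, assuming the conventional 1-10 rating scales used in classic FMEA practice (the function name and scale bounds are illustrative, not drawn from the assessment):

```python
def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
    """Classic FMEA Risk Priority Number.

    Each factor is conventionally rated 1-10 (detection: 1 = almost
    certain to be detected, 10 = nearly undetectable), so the RPN
    ranges from 1 to 1000; higher values are triaged first.
    """
    for name, value in (("severity", severity),
                        ("occurrence", occurrence),
                        ("detection", detection)):
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be in 1..10, got {value}")
    return severity * occurrence * detection

# A failure mode that is severe (8), moderately likely (5), and hard
# to detect (7) scores 8 * 5 * 7 = 280.
print(risk_priority_number(8, 5, 7))  # → 280
```

Because the three factors multiply, a weakness on any one axis can dominate the ranking, which is why newer FMEA guidance supplements raw RPN with qualitative action-priority rules.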
Scope
- Domain: Cross-disciplinary evidence evaluation methodologies
- Timeframe: Current published standards
- Testability: Verified by examining published standards, frameworks, and academic literature for each discipline
Assessment Summary
Probability: N/A (open-ended survey — no single hypothesis to assign probability)
Confidence: High
Hypothesis outcome: Open-ended query — evidence synthesized into thematic clusters rather than hypothesis testing. See assessment for full analysis.
[Full assessment in assessment.md.]
Status
| Field | Value |
|---|---|
| Date created | 2026-03-31 |
| Date completed | 2026-03-31 |
| Researcher profile | Not provided |
| Prompt version | ai-research-methodology 1.1.0 |
| Revisit by | 2027-06-30 |
| Revisit trigger | Publication of new cross-disciplinary evidence evaluation framework, major revision to any examined standard |