R0050/2026-03-31-02/Q002 — Assessment

BLUF

All eight named disciplines have formal truth-seeking methodologies with structured evidence evaluation. Five contribute concepts not already captured by the nine reference frameworks: law (consequence-calibrated proof thresholds), auditing (sufficiency-appropriateness dual-axis with evidence reliability hierarchy), engineering safety/FMEA (three-factor Risk Priority Number with detection dimension), historical source criticism (external/internal criticism separation, proximity hierarchy), and information literacy/SIFT (lateral reading). Three disciplines — Bradford Hill, OCEBM, and CASP — contribute refinements already substantially captured by GRADE and Cochrane.

Probability

Rating: N/A — open-ended survey query

Confidence in assessment: High

Confidence rationale: Nine primary/secondary sources examined across eight disciplines. All sources are authoritative (regulatory standards, established academic frameworks, or widely-used professional tools). The novelty assessments compare specific structural features against the reference set, reducing subjectivity.

Reasoning Chain

  1. Legal standards of proof form a six-tier calibrated hierarchy with empirically quantified probability ranges (42%-90%). The principle of consequence-calibrated proof thresholds — higher-stakes decisions require higher evidence thresholds — is not formalized in any reference framework. [SRC01-E01, High reliability, High relevance]

  2. PCAOB AS 1105 defines a dual-axis evaluation (sufficiency = quantity, appropriateness = quality) with an explicit evidence reliability hierarchy by source type (external > internal with controls > direct observation > copies). The quality-quantity tradeoff principle and the source-type reliability ranking are novel. [SRC02-E01, High reliability, High relevance]

  3. Bradford Hill's nine viewpoints for causal inference are largely captured by GRADE, which explicitly builds on them. The "analogy" viewpoint is a partial exception but is a weak heuristic rather than a formal tool. [SRC03-E01, Medium-High reliability, High relevance]

  4. FMEA (Failure Mode and Effects Analysis) assigns each failure mode a Risk Priority Number (Probability × Severity × Detection), introducing the detection dimension (how likely an error is to be caught), which is absent from all reference frameworks. [SRC04-E01, Medium-High reliability, High relevance]

  5. FTA (Fault Tree Analysis) decomposes causal chains with Boolean logic (AND/OR gates), a structural approach to causation not found in the reference set; its top-down analysis complements FMEA's bottom-up approach. [SRC05-E01, Medium-High reliability, Medium-High relevance]

  6. Historical source criticism contributes the external/internal criticism distinction, a proximity hierarchy, and formal tendency (bias direction) assessment. These predate all reference frameworks and add concepts not captured by them. [SRC06-E01, Medium reliability, High relevance]

  7. SIFT's lateral reading strategy — immediately leaving a source to check external coverage rather than evaluating it in isolation — is procedurally novel. CRAAP's five dimensions are captured by GRADE and Cochrane. [SRC07-E01, Medium reliability, Medium relevance]

  8. OCEBM's five-level evidence hierarchy is closely related to GRADE, with question-type specificity as its primary refinement. [SRC08-E01, High reliability, Medium relevance]

  9. CASP checklists operationalize Cochrane/ROBIS concepts in checklist format without adding novel concepts. [SRC09-E01, High reliability, Medium relevance]

  10. JUDGMENT: The five disciplines with novel contributions cluster into three categories: (a) threshold calibration (law, auditing), (b) failure/risk analysis (FMEA, FTA), and (c) source authentication (historical criticism, SIFT). Each category addresses an aspect of evidence evaluation not covered by the intelligence/scientific reference set. [JUDGMENT]
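The consequence-calibrated thresholds in item 1 can be sketched as a lookup from a named standard of proof to a minimum credence. The three tier names below follow common US legal usage, and the numeric cutoffs are illustrative assumptions chosen within the 42%-90% range the source reports, not the source's exact values or full six-tier hierarchy.

```python
# Illustrative mapping from standards of proof to minimum credence.
# Tier names and cutoffs are assumptions for illustration only.
THRESHOLDS = {
    "preponderance of the evidence": 0.50,
    "clear and convincing evidence": 0.75,
    "beyond a reasonable doubt": 0.90,
}

def sufficient(credence: float, standard: str) -> bool:
    """Does the assessed credence meet the named standard of proof?"""
    return credence >= THRESHOLDS[standard]

# The same evidence clears a low-stakes threshold but not a high-stakes one.
sufficient(0.80, "preponderance of the evidence")  # True
sufficient(0.80, "beyond a reasonable doubt")      # False
```

The point the sketch makes is structural: the evidence is held fixed while the required threshold scales with the stakes of the decision.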
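Item 2's dual-axis evaluation can be sketched as scoring a body of evidence separately for quantity and quality, with quality informed by the source-type reliability ranking given above. The numeric ranks and the mean-rank aggregation are illustrative assumptions, not part of AS 1105 itself.

```python
# Sketch of a sufficiency-appropriateness dual axis. Ranks follow the
# ordering in item 2 (external > internal with controls > direct
# observation > copies); the numbers themselves are assumptions.
RELIABILITY_RANK = {
    "external": 4,
    "internal_with_controls": 3,
    "direct_observation": 2,
    "copy": 1,
}

def evaluate_evidence(items):
    """Return (sufficiency, appropriateness) for a list of source types."""
    sufficiency = len(items)  # quantity axis
    appropriateness = (       # quality axis: mean reliability rank
        sum(RELIABILITY_RANK[t] for t in items) / len(items) if items else 0.0
    )
    return sufficiency, appropriateness

evaluate_evidence(["external", "copy"])  # (2, 2.5)
```

Keeping the two axes separate is what exposes the quality-quantity tradeoff: more low-rank items raise sufficiency without raising appropriateness.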
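The three-factor product in item 4 can be sketched directly; the 1-10 rating scale is the common FMEA convention, and the two failure modes below are hypothetical.

```python
# Illustrative FMEA Risk Priority Number. Each factor is conventionally
# rated 1-10; a higher Detection rating means the failure is LESS likely
# to be caught before it matters.
def rpn(probability: int, severity: int, detection: int) -> int:
    """Risk Priority Number: product of the three FMEA ratings."""
    for rating in (probability, severity, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings are conventionally 1-10")
    return probability * severity * detection

# Two hypothetical failure modes with equal probability and severity:
# the harder-to-detect one ranks higher. That detection dimension is
# what the reference frameworks lack.
visible_failure = rpn(probability=4, severity=7, detection=2)  # 56
silent_failure = rpn(probability=4, severity=7, detection=9)   # 252
```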
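The AND/OR gate decomposition in item 5 can be sketched as a small recursive evaluator; the tree and event names below are hypothetical, not drawn from the source.

```python
# Minimal fault-tree evaluator: a top event decomposed through AND/OR
# gates down to named basic events. Leaves are strings; internal nodes
# are ("AND" | "OR", [subtrees]).
def evaluate(node, basic_events):
    """True if the node's event occurs given the set of basic events."""
    if isinstance(node, str):       # leaf: basic event name
        return node in basic_events
    gate, children = node
    results = (evaluate(child, basic_events) for child in children)
    return all(results) if gate == "AND" else any(results)

# Hypothetical top event: a flawed report ships only if an error is made
# AND at least one detection layer fails.
tree = ("AND", ["error_made",
                ("OR", ["review_skipped", "reviewer_missed_it"])])

evaluate(tree, {"error_made", "reviewer_missed_it"})  # True
evaluate(tree, {"error_made"})                        # False
```

The top-down structure is the complement of FMEA's bottom-up scan: it starts from the undesired outcome and asks which combinations of basic failures suffice to produce it.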

Evidence Base Summary

Source | Description | Reliability | Relevance | Key Finding
SRC01 | Legal standards of proof | High | High | Consequence-calibrated proof thresholds (novel)
SRC02 | PCAOB AS 1105 | High | High | Sufficiency-appropriateness dual axis with reliability hierarchy (novel)
SRC03 | Bradford Hill criteria | Medium-High | High | Nine viewpoints; captured by GRADE
SRC04 | FMEA methodology | Medium-High | High | RPN with detection dimension (novel)
SRC05 | FTA methodology | Medium-High | Medium-High | Boolean logic for causal decomposition (novel)
SRC06 | Historical source criticism | Medium | High | External/internal criticism, proximity hierarchy (novel)
SRC07 | SIFT / CRAAP | Medium | Medium | Lateral reading (novel); CRAAP captured by GRADE
SRC08 | OCEBM levels | High | Medium | Question-type specificity; refinement of GRADE
SRC09 | CASP checklists | High | Medium | Checklist format; captured by Cochrane/ROBIS

Collection Synthesis

Dimension | Assessment
Evidence quality | Robust; primary standards, established frameworks, and academic sources
Source agreement | High; all disciplines clearly have formal methodologies
Source independence | High; each discipline developed independently
Outliers | None; all disciplines confirmed as having formal methods

Detail

The eight disciplines examined span law, accounting, epidemiology, medicine, engineering, history, and information science. All have developed formal, structured approaches to evidence evaluation, confirming that truth-seeking methodology is not limited to intelligence and scientific research. The novel contributions cluster around three themes that the reference set does not address: (1) how evidence thresholds should scale with decision consequences, (2) how to assess detectability of errors/failures, and (3) how to authenticate sources before evaluating their content.

Gaps

Missing Evidence | Impact on Assessment
Forensic science evidence standards | Low; likely overlaps with legal standards
Actuarial science probability frameworks | Medium; may contribute additional calibrated probability approaches
Archival science appraisal standards | Low; likely overlaps with historical source criticism

Researcher Bias Check

Declared biases: No researcher profile provided.

Influence assessment: The query structure may bias toward finding novel contributions by focusing attention on differences rather than commonalities. Mitigated by explicitly identifying where frameworks do NOT contribute novelty (Bradford Hill, OCEBM, CASP).

Cross-References

Entity | ID | File
Hypotheses | H1, H2, H3 | hypotheses/
Sources | SRC01, SRC02, SRC03, SRC04, SRC05, SRC06, SRC07, SRC08, SRC09 | sources/
ACH Matrix | | ach-matrix.md
Self-Audit | | self-audit.md