R0049/2026-03-31/Q001-H1
Statement
At least one published prompt implements a full analytical rigor framework for AI research — covering search methodology, evidence scoring, bias assessment, confidence calibration, synthesis, and self-audit in a single coherent system.
Status
Partially supported. Individual components exist in isolation (an LLM-based ACH implementation, Framework Chain-of-Thought screening, the PRISMA-trAIce reporting checklist), but no single published prompt was found that integrates them all into a unified research-methodology system prompt.
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC04-E01 | Roberts implemented ACH, Starbursting, and Key Assumptions Check as LLM-powered tools with published source code |
| SRC02-E01 | Framework Chain-of-Thought prompting directs LLMs to reason against predefined frameworks with 93.6% accuracy |
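The Framework Chain-of-Thought pattern cited above can be illustrated with a minimal sketch: the model is directed to reason about each predefined criterion in turn before issuing a verdict. The criterion names and template wording here are illustrative assumptions, not the published prompt from SRC02-E01.

```python
# Hypothetical Framework Chain-of-Thought screening prompt builder.
# CRITERIA and the template text are illustrative assumptions, not the
# published prompt; only the per-criterion stepwise structure reflects
# the pattern described in the evidence table.

CRITERIA = [
    "Population: does the study involve the target population?",
    "Intervention: does it evaluate the intervention of interest?",
    "Outcome: does it report at least one prespecified outcome?",
]

def screening_prompt(abstract: str) -> str:
    """Build a prompt that forces one reasoning step per framework criterion."""
    steps = "\n".join(
        f"Step {i}: {c} Answer yes/no and quote the supporting text."
        for i, c in enumerate(CRITERIA, 1)
    )
    return (
        "Screen the abstract below against each criterion, one step at a time.\n"
        f"{steps}\n"
        "Final: include only if every answer is yes.\n\n"
        f"Abstract: {abstract}"
    )
```

The point of the pattern is that reasoning is anchored to an externally defined framework rather than left open-ended, which is what makes the screening auditable.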
Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC03-E01 | Major prompt libraries contain no research-methodology prompts implementing full frameworks |
| SRC05-E01 | Agent Laboratory automates research workflow but lacks analytical rigor mechanisms |
Reasoning
The closest implementations found are Roberts' LLM SATs work (individual structured analytic techniques) and the Framework Chain-of-Thought approach (systematic screening against predefined criteria). However, both are single-technique implementations rather than comprehensive frameworks. No published prompt was found that combines ICD 203 probability language, GRADE evidence scoring, ROBIS self-audit, and ACH into a single system prompt.
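To make the gap concrete, the components named above could in principle be composed mechanically. The sketch below is a hypothetical composition, not a published design: the ICD 203 probability bands are taken from the published standard, but the section names and prompt wording are assumptions for illustration only.

```python
# Hypothetical sketch of composing the named components into one system prompt.
# ICD203_BANDS follows the ICD 203 expression-of-likelihood standard; the
# build_system_prompt() structure and all section wording are illustrative
# assumptions, not a published implementation.

ICD203_BANDS = [
    (0.05, "almost no chance"),
    (0.20, "very unlikely"),
    (0.45, "unlikely"),
    (0.55, "roughly even chance"),
    (0.80, "likely"),
    (0.95, "very likely"),
    (1.00, "almost certain"),
]

def icd203_term(p: float) -> str:
    """Map a numeric probability to ICD 203 expression-of-likelihood language."""
    for upper, term in ICD203_BANDS:
        if p <= upper:
            return term
    return "almost certain"

def build_system_prompt() -> str:
    """Assemble one system prompt from the separately published components
    (hypothetical composition; each section is an assumption)."""
    sections = {
        "Evidence scoring": "Rate each source per GRADE (high/moderate/low/very low certainty).",
        "Hypothesis testing": "Apply ACH: score every item of evidence against every hypothesis.",
        "Confidence language": "Express all likelihoods using ICD 203 terms only.",
        "Self-audit": "Before finalizing, run a ROBIS-style bias check on your own synthesis.",
    }
    return "\n".join(f"## {name}\n{instruction}" for name, instruction in sections.items())
```

That such a composition is mechanically straightforward underlines the finding: the obstacle is not technical assembly but the absence of any published prompt that has actually done it.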
Relationship to Other Hypotheses
Supports H3 (partial implementations exist). Partially contradicts H2 (some implementations do exist, just not comprehensive ones).