R0049/2026-03-31/Q001 — Assessment
BLUF
No published AI/LLM system prompt was found that implements a complete
analytical rigor framework for research. Partial implementations exist —
notably Scott Roberts' LLM-based structured analytic techniques and the
Framework Chain-of-Thought prompting approach — but these address individual
techniques or single research phases rather than end-to-end methodology. The
gap between what exists and a comprehensive framework is substantial.
Probability

| | |
| --- | --- |
| Rating | Very unlikely (5-20%) that a complete, published framework prompt exists and was not found |
| Confidence | Medium-High |
| Confidence rationale | Multiple independent search strategies across academic databases, GitHub, and prompt libraries converged on the same finding. The search covered the most likely publication venues. However, unpublished corporate/government prompts and non-English-language publications represent potential blind spots. |
Reasoning Chain
- The query asks whether a published prompt exists that implements a full analytical rigor framework for AI research.
- Three independent search strategies were executed: academic literature (S01), GitHub repositories (S02), and practitioner publications (S03).
- Academic literature revealed framework extensions (PRISMA-trAIce SRC01-E01, L-PRISMA) and methodology enhancements (Framework CoT SRC02-E01) — but these are reporting checklists and single-phase prompts, not complete system prompts.
- GitHub repositories revealed that the largest prompt libraries (SRC03-E01) lack research methodology prompts entirely, and that research automation tools (SRC05-E01) lack analytical rigor mechanisms.
- Practitioner publications revealed Roberts' SAT implementations (SRC04-E01) — the closest match — but these are three separate tools, not a unified framework.
- The pattern across all sources is consistent: the field has invested in automating research tasks but not in encoding analytical discipline.
Evidence Base Summary
| Source | Reliability | Relevance | Key finding |
| --- | --- | --- | --- |
| SRC01 | High | Medium | Reporting checklist, not operational prompt |
| SRC02 | Medium-High | High | Framework-guided prompting works (93.6% accuracy) but covers screening only |
| SRC03 | Medium | Medium | Largest prompt libraries lack research methodology content |
| SRC04 | Medium | High | Individual SATs implemented as LLM tools, not unified framework |
| SRC05 | Medium-High | Medium | Research automation without analytical rigor |
Collection Synthesis
| Dimension | Assessment |
| --- | --- |
| Evidence quality | Mix of peer-reviewed (SRC01), preprint (SRC02), and practitioner (SRC04) sources — adequate for landscape characterization |
| Source agreement | High agreement across all sources: partial implementations exist, comprehensive frameworks do not |
| Independence | Sources are independent — different authors, venues, and approaches converge on the same finding |
| Outliers | None — all evidence points in the same direction |
Detail
The evidence collection reveals a consistent landscape: the AI research
community has produced (a) reporting frameworks for AI use in research
(PRISMA-trAIce), (b) single-technique prompt implementations (Framework CoT,
Roberts' SATs), and (c) automated research workflow platforms (Agent Laboratory,
STORM, PaperQA2). None of these combine into a comprehensive analytical rigor
system prompt. The gap exists at the integration layer — combining multiple
techniques into a unified methodology.
Gaps
| Gap | Impact on confidence |
| --- | --- |
| Unpublished corporate/government prompts not searchable | Could reduce confidence if classified or proprietary frameworks exist |
| Non-English-language publications not searched | Minor — framework development appears concentrated in English-language venues |
| Preprint status of key source (SRC02) | Minor — findings may change but core landscape characterization is stable |
| Internal AI company research (Anthropic, OpenAI, Google) | Could reduce confidence — these organizations may have internal methodology prompts |
Researcher Bias Check
The researcher is building a unified research methodology system, which creates
an incentive to find that no prior art exists. This bias was mitigated by:
(a) designing searches specifically to find confirming evidence for H1,
(b) evaluating partial implementations generously, and (c) documenting Roberts'
SAT work as genuine prior art rather than dismissing it.
Cross-References
- Q002 — Related: whether IC and scientific frameworks have been combined at all (for any use)
- Q003 — Related: whether tools (not prompts) implement structured analytical frameworks