R0049/2026-03-31/Q001
Query
Has anyone published a complete, usable AI/LLM system prompt that implements a
full analytical rigor framework for research?
BLUF
No published system prompt implements a complete analytical rigor framework for
AI research. Partial implementations exist — individual structured analytic
techniques as LLM tools and framework-guided screening prompts — but none
achieve comprehensive coverage combining evidence scoring, bias assessment,
confidence calibration, competing hypotheses, and self-audit.
Answer + Confidence
Very unlikely (5-20%) that a complete published framework prompt exists
undiscovered. Confidence: Medium-High.
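The probability term and numeric range above follow the ICD 203 estimative-language bands. As an illustrative sketch (not part of any published prompt surveyed here), the mapping from expression to range can be written as a small lookup; the band wording follows the publicly documented ICD 203 table, though exact phrasing varies slightly between editions:

```python
# ICD 203 expressions of likelihood mapped to (low, high) probability
# ranges. Assumption: standard public table; edition wording may differ.
ICD203_BANDS = {
    "almost no chance": (0.01, 0.05),
    "very unlikely": (0.05, 0.20),
    "unlikely": (0.20, 0.45),
    "roughly even chance": (0.45, 0.55),
    "likely": (0.55, 0.80),
    "very likely": (0.80, 0.95),
    "almost certain": (0.95, 0.99),
}

def band_for(term: str) -> tuple[float, float]:
    """Return the (low, high) probability range for an ICD 203 term."""
    return ICD203_BANDS[term.lower()]
```

Under this mapping, the report's "Very unlikely" verdict corresponds to `band_for("Very unlikely")`, i.e. the 5-20% range stated above.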
Summary
Hypotheses
| ID | Statement | Status |
| --- | --- | --- |
| H1 | A full framework prompt exists | Partially supported |
| H2 | Nothing exists at all | Eliminated |
| H3 | Partial implementations only | Supported |
Searches
| Search | Type | Outcome |
| --- | --- | --- |
| S01 | Academic literature | 3 selected / 17 rejected — reporting checklists and methodology extensions, no complete prompts |
| S02 | GitHub repositories | 3 selected / 17 rejected — prompt libraries lack methodology content; research tools lack rigor |
| S03 | Practitioner publications | 2 selected / 18 rejected — individual SAT implementations and framework-guided prompting |
Sources
Revisit Triggers
- Publication of a comprehensive research methodology system prompt on GitHub
or in academic literature
- Release of AI research agent tools that explicitly implement ICD 203, GRADE,
or ROBIS frameworks
- Framework Chain-of-Thought paper publication in peer-reviewed venue (currently
preprint)
- Roberts or similar practitioners extending individual SAT tools into unified
frameworks