Q001-H1 — Complete Published Prompts Exist¶
Statement¶
Complete and usable AI/LLM system prompts have been published that implement a full analytical rigor framework for research, going beyond narrow tasks to encompass a comprehensive research methodology grounded in frameworks such as ICD 203, GRADE, PRISMA, Cochrane, or IPCC.
Status¶
Not supported — No evidence found of a published prompt meeting the full criteria.
Supporting Evidence¶
| Evidence ID | Summary | Strength |
|---|---|---|
| — | No direct supporting evidence found | — |
Contradicting Evidence¶
| Evidence ID | Summary | Strength |
|---|---|---|
| SRC01-E01 | The Prompt Report's taxonomy of 58 techniques found no comprehensive research methodology prompt | Strong |
| SRC04-E01 | Perplexity's leaked prompt contains zero analytical framework components | Strong |
| SRC02-E01 | sroberts implementation covers only 3 of 66 SATs, none as a system prompt | Moderate |
Reasoning¶
Despite an extensive search across academic literature, GitHub repositories, prompt libraries, leaked commercial prompts, and practitioner publications, no published system prompt was found that implements a complete analytical rigor framework. The closest candidates either implement a small subset of techniques (sroberts: 3 of 66 SATs) or build research pipelines in code rather than in prompts (Agent Laboratory, AI-Researcher). Commercial systems such as Perplexity and OpenAI Deep Research carry minimal analytical methodology in their system prompts, delegating rigor to model training rather than to prompt architecture.