Q001-H3 — Partial Implementations Exist
Statement
Partial implementations exist that apply individual structured analytic techniques or single research-workflow stages via LLMs, but no published prompt covers a full analytical rigor framework. The gap between narrow-task prompts and full-framework prompts remains unfilled.
Status
Supported. This is the best-supported hypothesis given the available evidence.
Supporting Evidence
| Evidence ID | Summary | Strength |
|---|---|---|
| SRC02-E01 | sroberts implements 3 of 66 SATs (Starbursting, Analysis of Competing Hypotheses, Key Assumptions Check) | Strong |
| SRC05-E01 | Agent Laboratory: multi-phase research pipeline in code, not prompt | Strong |
| SRC06-E01 | AI-Researcher: complete research workflow, NeurIPS 2025 | Strong |
| SRC01-E01 | 58 prompting techniques catalogued, none a full research framework | Strong |
| SRC03-E01 | PRISMA-trAIce: reporting standard for AI in reviews, not an operational prompt | Moderate |
| SRC04-E01 | Commercial prompts prioritize citation transparency over analytical rigor | Moderate |
Contradicting Evidence
| Evidence ID | Summary | Strength |
|---|---|---|
| — | No direct contradicting evidence | — |
Reasoning
The evidence converges on a clear pattern: the field has produced (a) numerous narrow-task prompts for specific review stages (screening, extraction, checklist verification), (b) a handful of partial implementations of structured analytic techniques via LLMs, and (c) comprehensive research agent systems implemented in code rather than as system prompts. The gap is specifically at the intersection of "published as a system prompt" and "implements a full analytical rigor framework." This gap likely exists because comprehensive methodologies require multi-step orchestration that exceeds what a single system prompt can effectively encode; hence the shift to code-based agent architectures, as sketched below.
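To make the orchestration point concrete, here is a minimal sketch of the code-based pattern the agent systems above follow: each structured analytic technique becomes its own narrow-task prompt, while the framework-level logic (phase ordering, state passing) lives in ordinary code. The `call_llm` stub and the phase prompts are illustrative assumptions, not any cited system's implementation; the phase names merely echo the SATs cited in SRC02-E01.

```python
from dataclasses import dataclass

# Placeholder for a real model call (OpenAI, Anthropic, etc.).
# This is an assumed stub, not any cited system's API.
def call_llm(system_prompt: str, user_input: str) -> str:
    return f"[output of phase '{system_prompt[:40]}...' on: {user_input[:40]}...]"

@dataclass
class Phase:
    name: str
    system_prompt: str  # one narrow-task SAT prompt per phase

# Hypothetical phases; prompts are illustrative only.
PHASES = [
    Phase("starbursting",
          "Generate who/what/when/where/why/how questions about the claim."),
    Phase("ach",
          "List competing hypotheses and score each piece of evidence against them."),
    Phase("key_assumptions_check",
          "Identify and challenge every assumption underlying the analysis."),
]

def run_pipeline(claim: str) -> dict[str, str]:
    """Framework-level orchestration: phase ordering and state passing
    live in code, which a single static system prompt cannot express."""
    state = claim
    outputs: dict[str, str] = {}
    for phase in PHASES:
        # Each phase consumes the previous phase's output as explicit state.
        state = call_llm(phase.system_prompt, state)
        outputs[phase.name] = state
    return outputs

if __name__ == "__main__":
    for name, text in run_pipeline("Sample analytic claim.").items():
        print(f"{name}: {text}")
```

A full-framework system prompt would have to flatten this loop, the intermediate state, and any gating logic into static instructions, which is precisely the limitation the reasoning above identifies.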