R0049/2026-03-31/Q001-S03 — Search Log

Research: R0049 — Landscape Scan
Run: 2026-03-31
Query: Q001
Search: S03

Summary

Source/Database: Web (Google via WebSearch)
Query terms:
  1. "'prompt library' OR 'system prompt' research analysis 'GRADE' OR 'systematic review' OR 'evidence quality' LLM AI agent complete published"
  2. "'Heuer' 'structured analytic techniques' AI LLM implementation prompt automated intelligence analysis 2024 2025"
Filters: Date 2024-2025
Results returned: 20 (across two searches)
Results selected: 2
Results rejected: 18

Selected Results

| Result | Title | URL | Rationale |
| --- | --- | --- | --- |
| S03-R01 | Framework Chain-of-Thought Prompting (medRxiv) | https://www.medrxiv.org/content/10.1101/2024.06.01.24308323v1.full | Novel prompting approach directing LLMs to reason against predefined frameworks |
| S03-R02 | LLM SATs FTW (sroberts.io) | https://sroberts.io/posts/llm-sats-ftw/ | Published implementation of structured analytic techniques (ACH, Starbursting, Key Assumptions Check) using LLMs, with source code |

Rejected Results

| Title | URL | Rationale |
| --- | --- | --- |
| LLM to screen titles in living systematic review | https://pmc.ncbi.nlm.nih.gov/articles/PMC12306261/ | Screening-only prompts; narrow task |
| Harnessing LLMs for Data Extraction in SRs | https://pmc.ncbi.nlm.nih.gov/articles/PMC12559671/ | Data extraction prompts only |
| LitLLM: A Toolkit for Scientific Literature Review | https://arxiv.org/html/2402.01788v1 | Toolkit, not a methodology prompt |
| System for SLR Using Multiple AI agents | https://arxiv.org/html/2403.08399v2 | Multi-agent workflow, not a methodology framework prompt |
| Prompting Guide papers | https://www.promptingguide.ai/papers | General prompt engineering resources |
| FIU Literature Reviews with Prompts | https://library.fiu.edu/ai/lit-review | Academic library guide, not a published framework |
| TAMU How to Craft Prompts | https://tamu.libguides.com/ | Academic library guide |
| LLM Agents Prompting Guide | https://www.promptingguide.ai/research/llm-agents | General agent guide |
| Heuer/Pherson book references (multiple) | Various Amazon/Google Books URLs | Book references, not prompt implementations |
| Remaining Heuer search results | | Book reviews, notes, and summaries; not AI implementations |

Notes

The Framework Chain-of-Thought paper and Roberts' LLM SATs blog post are the two most substantive findings for Q001. Framework CoT directs the model to reason against a predefined framework (reporting 93.6% accuracy on a screening task), while Roberts implements three individual SATs as LLM-powered web applications. Both are partial implementations rather than comprehensive frameworks.
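Roberts' post ships its own source code; for the framework-guided approach, the following is only a minimal sketch of how such a prompt might be assembled. The framework items, function name, and wording are hypothetical stand-ins, not the actual prompts from the medRxiv paper:

```python
# Illustrative framework chain-of-thought style prompt assembly for a
# screening task. All framework items and wording here are hypothetical.

FRAMEWORK = [
    "Population: does the record involve the population of interest?",
    "Intervention: is the intervention of interest evaluated?",
    "Outcome: are relevant outcomes reported?",
]

def build_framework_cot_prompt(title: str, abstract: str) -> str:
    """Direct the model to reason against each framework item in order
    before committing to an include/exclude decision."""
    steps = "\n".join(f"{i}. {item}" for i, item in enumerate(FRAMEWORK, 1))
    return (
        "Screen the record below against each framework item, in order.\n"
        "For each item, state your reasoning, then answer yes or no.\n\n"
        f"Framework:\n{steps}\n\n"
        f"Title: {title}\n"
        f"Abstract: {abstract}\n\n"
        "End with one line: DECISION: include or DECISION: exclude"
    )
```

The point of the pattern is that the reasoning steps are pinned to an external, predefined framework rather than left to the model's free-form chain of thought.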