S01 — Academic Prompt Frameworks Search Log

Research R0049 — Landscape Scan
Run 2026-03-31-02
Query Q001
Search S01

Tool and Description

| Field | Value |
| --- | --- |
| Tool | WebSearch |
| Query | AI LLM system prompt implementing full research methodology framework GRADE PRISMA systematic review 2024 2025 |
| Date | 2026-03-31 |
| Results returned | 10 |
| Selected | 3 |
| Rejected | 7 |

Summary

Searched for academic publications describing complete system prompts implementing research methodology frameworks. Results heavily skewed toward papers about using LLMs within PRISMA workflows (screening, extraction) rather than prompts implementing PRISMA as an analytical framework.

Selected Results

| Result | Title | Rationale |
| --- | --- | --- |
| S01-R01 | PRISMA-trAIce Checklist | Directly addresses AI in systematic review methodology (reporting standard) |
| S01-R02 | PRISMA-DFLLM Extension | Domain-specific framework combining PRISMA with LLMs |
| S01-R03 | Emergence of LLMs as Tools in Literature Reviews | Systematic review of LLM use in review processes |

Rejected Results

| Title | Rationale for Rejection |
| --- | --- |
| AI Agents in Clinical Medicine (PMC) | About AI agents as subjects, not implementing methodology in prompts |
| LLM-Powered Automated Assessment (MDPI) | Focused on educational assessment, not research methodology |
| LLMs and PRISMA 2020 Adherence (Springer) | Uses LLMs to check adherence, not implement the framework |
| Evaluation of AI Tools vs PRISMA (PMC) | Compares AI tools against PRISMA, not implementing it |
| Systematic Review of Multilingual Applications (ERIC) | Unrelated to analytical rigor frameworks |
| Emergence of LLMs (duplicate PMC) | Duplicate of selected result S01-R03 |
| LLMs as Tools in Literature Reviews (Oxford) | Duplicate of selected result S01-R03 |

Notes

The academic literature treats PRISMA and GRADE as standards to evaluate AI against, not as frameworks to implement within AI system prompts. This gap is itself a significant finding for the query: none of the ten results describes a system prompt that operationalizes either framework as an analytical method.