
R0049/2026-03-31/Q001-S01 — Search Log

Research R0049 — Landscape Scan
Run 2026-03-31
Query Q001
Search S01

Summary

| Field | Value |
|---|---|
| Source/Database | Web (Google via WebSearch) |
| Query terms | "AI system prompt implementing full research methodology framework GRADE PRISMA systematic review 2024 2025"; "published LLM system prompt analytical rigor framework research methodology complete prompt text 2024 2025" |
| Filters | Date: 2024-2025 |
| Results returned | 20 (across two searches) |
| Results selected | 3 |
| Results rejected | 17 |

Selected Results

| Result | Title | URL | Rationale |
|---|---|---|---|
| S01-R01 | PRISMA-trAIce Checklist | https://pmc.ncbi.nlm.nih.gov/articles/PMC12694947/ | Directly addresses AI transparency in systematic reviews with a structured framework |
| S01-R02 | L-PRISMA: Extension in the Era of GenAI | https://arxiv.org/html/2603.19236 | Proposes integrating human-led synthesis with AI-assisted statistical pre-screening |
| S01-R03 | AI-Powered Prompt Engineering for Research | https://www.preprints.org/frontend/manuscript/54d94166a55c5769d755bcf08fc72b49/download_pub | Preprint on prompt engineering frameworks for research tasks |

Rejected Results

| Title | URL | Rationale |
|---|---|---|
| AI in Students' Mathematics Learning: PRISMA-Based Review | https://www.ijlter.org/index.php/ijlter/article/view/16042 | Uses PRISMA as a review method; does not implement it as an AI prompt framework |
| PRISMA AI reporting guidelines for systematic reviews | https://www.researchgate.net/publication/367180953 | Reporting guidelines, not a system prompt implementation |
| AI tools in evidence synthesis (KCL LibGuide) | https://libguides.kcl.ac.uk/systematicreview/ai | Tool catalog, not a prompt framework |
| PRISMA Systematic Review on AI-Enhanced VR | https://dl.acm.org/doi/10.1145/3757749.3757824 | Subject-matter review using PRISMA, not about implementing PRISMA in AI |
| Evaluation of AI Tools vs PRISMA Method | https://ai.jmir.org/2025/1/e68592 | Compares AI tools against PRISMA; does not implement PRISMA as a prompt |
| Large Language Models for RE | https://arxiv.org/pdf/2509.11446 | Requirements engineering focus, not research methodology |
| Prompt Engineering Framework for Mental Health | https://mental.jmir.org/2025/1/e75078/PDF | MIND-SAFE framework for mental health dialogues, not research methodology |
| Prompting LLMs for quality ecological statistics | https://ecoevorxiv.org/repository/object/9493/download/17692/ | Statistical rigor guidelines, not a complete research methodology prompt |
| LLM Evaluation in 2025 | https://www.techrxiv.org/users/927947/articles/1304989 | Model evaluation metrics, not research methodology |
| Systematic Survey of Prompt Engineering | https://arxiv.org/abs/2402.07927 | General survey, not a specific framework implementation |
| World Bank Guidance Note on AI | https://documents1.worldbank.org/curated/en/099136005132515321/pdf/ | Policy guidance, not a prompt framework |
| K2-V2 Technical Report | https://www.llm360.ai/reports/K2_V2_report.pdf | Model architecture report, not research methodology |
| LLM to screen titles in living systematic review | https://pmc.ncbi.nlm.nih.gov/articles/PMC12306261/ | Screening only; narrow task |
| Enhancing Software Requirements Engineering | https://aclanthology.org/2025.acl-srw.31.pdf | Requirements engineering, not research methodology |
| LLMs for Structured and Semi-Structured Data | https://www.mdpi.com/2079-9292/14/15/3153 | Data processing survey, not research methodology |
| A Systematic Survey of Prompt Engineering (duplicate concept) | | Covered by other results |
| Remaining results from second search | | Various LLM evaluation and prompt engineering papers not specific to research methodology frameworks |

Notes

The academic literature contains extensive work on using AI within systematic review processes (screening, data extraction), but no published system prompts implementing a complete research methodology framework were found. The closest matches are a reporting checklist (PRISMA-trAIce) and a methodology extension (L-PRISMA) rather than executable prompt artifacts.