# R0049/2026-03-31/Q001-SRC02 — Scorecard

## Source
| Field | Value |
|---|---|
| Title | Prompting is all you need: LLMs for systematic review screening (Framework Chain-of-Thought) |
| Publisher | medRxiv (preprint) |
| Authors | Multiple |
| Date | 2024 |
| URL | https://www.medrxiv.org/content/10.1101/2024.06.01.24308323v1.full |
| Type | Preprint |
## Ratings
| Dimension | Rating |
|---|---|
| Reliability | Medium-High |
| Relevance | High |
| Missing data | Low risk |
| Measurement | Low risk |
| Selective reporting | Some concerns |
| Randomization | N/A |
| Protocol deviation | N/A |
| COI/funding | Low risk |
## Rationale
| Dimension | Rationale |
|---|---|
| Reliability | Preprint with quantitative evaluation across 10 systematic reviews; not yet peer-reviewed |
| Relevance | Directly implements a prompting approach that encodes systematic review framework reasoning into LLM instructions |
| Missing data | Reports accuracy and sensitivity across multiple review types |
| Measurement | Clear metrics (accuracy, sensitivity) with defined baselines |
| Selective reporting | Preprint status means final reporting choices may change |
| COI/funding | No apparent conflicts noted |
## Evidence Extracts
| Evidence | Summary |
|---|---|
| SRC02-E01 | Framework CoT directs LLMs to reason against predefined systematic review criteria, achieving 93.6% accuracy; covers the screening phase only |