R0049/2026-03-31/Q003-S01 — Search Log
Summary
| Field | Value |
|---|---|
| Source/Database | Web (Google via WebSearch), GitHub |
| Query terms | "Elicit Consensus ASReview AI systematic review tool structured methodology evidence synthesis platform 2025"; "scite.ai Undermind Storm Stanford AI research tool evidence quality scoring citation verification features"; "PaperQA FutureHouse AI research evidence synthesis automated methodology structured output citations 2025" |
| Filters | Date: 2025-2026 |
| Results returned | 30 (across three searches) |
| Results selected | 5 |
| Results rejected | 25 |
Selected Results
| Result | Title | URL | Rationale |
|---|---|---|---|
| S01-R01 | PaperQA2 (GitHub) | https://github.com/Future-House/paper-qa | Most advanced open-source academic research agent |
| S01-R02 | STORM (GitHub) | https://github.com/stanford-oval/storm | Stanford knowledge curation system with multi-perspective methodology |
| S01-R03 | Elicit | https://elicit.com/ | Leading AI systematic review tool |
| S01-R04 | Scite.ai | https://scite.ai/ | Smart Citations — citation context analysis |
| S01-R05 | Undermind | https://www.undermind.ai/ | Successive search methodology with citation chain following |
Rejected Results
| Result | Title | URL | Rationale |
|---|---|---|---|
| — | Evaluating Elicit as semi-automated reviewer (SAGE) | https://journals.sagepub.com/doi/10.1177/08944393251404052 | Evaluation paper, not the tool itself |
| — | KCL AI tools in evidence synthesis | https://libguides.kcl.ac.uk/systematicreview/ai | Library guide, not a tool |
| — | Comparison of Elicit AI and Traditional Searching (Cochrane) | https://onlinelibrary.wiley.com/doi/full/10.1002/cesm.70050 | Evaluation paper |
| — | Using AI for systematic review: example of Elicit (PMC) | https://pmc.ncbi.nlm.nih.gov/articles/PMC11921719/ | Description paper |
| — | GMU AI Tools for Literature Reviews | https://infoguides.gmu.edu/GenArtificial-Intelligence/lit_reviews | Library guide |
| — | TAMUSA AI Literature Research Tools | https://libguides.tamusa.edu/AI_for_lit_research/web_tools | Library guide |
| — | Multiple tool comparison articles | Various | Comparison/review articles, not tool documentation |
| — | PaperQA2 academic paper (arXiv) | https://arxiv.org/pdf/2409.13740 | Academic paper about PaperQA2 — supplementary to GitHub source |
| — | FutureHouse engineering blog | https://www.futurehouse.org/research-announcements/ | Blog post, supplementary |
| — | WikiCrow Demo | https://www.futurehouse.org/articles/wikicrow-old | Application of PaperQA, not a separate tool |
| — | SciSpace comparison articles | Various | Comparison articles |
| — | ResearchRabbit | Various | Literature discovery tool without structured analytical features |
| — | Remaining results | — | Various tool descriptions and comparisons without analytical rigor features |
Notes
This search identified the major AI research platforms. Direct inspection of
the GitHub repositories (PaperQA2, STORM) confirmed that neither implements the
five target features. Commercial tools (Elicit, scite) offer some of these
features, but none provides a comprehensive framework.