R0023/2026-03-25/Q001/SRC01/E01
Taxonomy of 58 prompt engineering techniques from a systematic PRISMA-based review of 1,565 papers.
URL: https://arxiv.org/abs/2406.06608
Extract
The Prompt Report identifies 58 text-based prompting techniques categorized into six major groups: Zero-Shot, Few-Shot, Thought Generation, Ensembling, Self-Criticism, and Decomposition. The survey was conducted using the PRISMA process, accumulating and filtering a dataset of 1,565 papers from arXiv, Semantic Scholar, and ACL Anthology. The taxonomy provides the definitive catalog of prompt engineering techniques as of 2025, establishing the landscape against which individual techniques can be evaluated for effectiveness or counterproductive effects.
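The six-group structure described above can be sketched as a simple mapping. The category names are taken from the extract; the per-category example techniques are illustrative assumptions (well-known members of each family), not the report's full enumeration of all 58 techniques.

```python
# Sketch of the six top-level categories in The Prompt Report's taxonomy.
# Category names come from the extract above; the example techniques are
# illustrative assumptions, not an exhaustive mapping of the 58 techniques.
TAXONOMY: dict[str, list[str]] = {
    "Zero-Shot": ["Role Prompting", "Emotion Prompting"],
    "Few-Shot": ["KNN Exemplar Selection", "Exemplar Ordering"],
    "Thought Generation": ["Chain-of-Thought", "Zero-Shot CoT"],
    "Decomposition": ["Least-to-Most Prompting"],
    "Ensembling": ["Self-Consistency"],
    "Self-Criticism": ["Self-Refine"],
}

def techniques_in(category: str) -> list[str]:
    """Return the illustrative techniques recorded under a category."""
    return TAXONOMY.get(category, [])
```

A structure like this is useful when tagging individual studies (e.g., for Q001) against the taxonomy, since each experimental result can be keyed to one of the six groups.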
Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Supports | Establishes the breadth of techniques that exist, many of which subsequent studies show are context-dependent or counterproductive in specific applications |
| H2 | Supports | The survey catalogs documented best practices, implying the techniques are generally effective; this is consistent with the view that failures are edge cases |
| H3 | Supports | The categorization of 58 techniques into six groups implies significant complexity and context-dependence in when each technique applies |
Context
This is a survey, not an experimental study. It catalogs techniques and their documented applications but does not conduct original experiments to determine which techniques are counterproductive. Its value for Q001 is as the foundational reference that maps the landscape.
Notes
The 31-author team spans major AI labs and universities, making this the most authoritative catalog of prompt engineering techniques available. However, its scope means it provides breadth rather than depth on any single technique's failure modes.