
R0023/2026-03-25/Q001/SRC01

The Prompt Report — most comprehensive systematic survey of prompt engineering techniques

Source

| Field | Value |
|---|---|
| Title | The Prompt Report: A Systematic Survey of Prompting Techniques |
| Publisher | arXiv (preprint) |
| Author(s) | Sander Schulhoff et al. (31 authors from OpenAI, Microsoft, Google, Princeton, Stanford, UMD) |
| Date | 2024-06-06 (v1), 2025-02-26 (v6) |
| URL | https://arxiv.org/abs/2406.06608 |
| Type | Systematic review / meta-analysis |

Summary

| Dimension | Rating |
|---|---|
| Reliability | High |
| Relevance | Medium |
| Bias: Missing data | Low risk |
| Bias: Measurement | N/A |
| Bias: Selective reporting | Low risk |
| Bias: Randomization | N/A (not an RCT) |
| Bias: Protocol deviation | N/A (not an RCT) |
| Bias: COI/Funding | Low risk |

Rationale

| Dimension | Rationale |
|---|---|
| Reliability | PRISMA-based systematic review of 1,565 papers. Multi-institutional authorship including major AI labs. Six revisions over roughly eight months demonstrate ongoing quality control. |
| Relevance | Provides the taxonomic framework for prompt engineering techniques, but it is primarily a survey rather than a study of counterproductive effects specifically. Rated Medium because it catalogs what exists rather than evaluating what fails. |
| Bias flags | Low risk across the board. Multi-institutional authorship reduces single-entity bias, and the PRISMA methodology enforces transparent search and selection processes. |

Evidence Extracts

| Evidence ID | Summary |
|---|---|
| SRC01-E01 | Identifies 58 text-based prompting techniques organized into 6 categories, establishing the taxonomy against which counterproductive findings can be mapped. |