

Research R0021 — Prompt engineering definitions
Run 2026-03-25
Query Q003
Source SRC01
Evidence SRC01-E01
Type Factual

OpenAI's six prompt engineering strategies and their measurability

URL: https://platform.openai.com/docs/guides/prompt-engineering

Extract

OpenAI's six high-level strategies:

  1. Write clear instructions — Subjective. No metric for "clear."
  2. Provide reference text — Structural (actionable but not quantifiable)
  3. Split complex tasks into simpler subtasks — Subjective. No metric for complexity threshold.
  4. Give the model time to "think" — Structural (chain-of-thought prompting)
  5. Use external tools — Structural (tool use architecture)
  6. Test changes systematically — Process recommendation. OpenAI states: "AI engineering is inherently an empirical discipline" and advises "building informative evals and iterating often."
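
To make the first four strategies concrete, the sketch below assembles a single chat prompt that states clear instructions, embeds reference text, splits the task into explicit subtasks, and asks the model to reason before answering. The refund-policy text, subtasks, and wording are invented for illustration; none of it comes from OpenAI's guide.

    # Hypothetical illustration of strategies 1-4. The refund policy, subtasks,
    # and wording below are invented for this sketch, not taken from OpenAI's guide.

    REFERENCE_TEXT = (
        "Refund policy: purchases may be returned within 30 days of delivery. "
        "Digital goods are refundable only if they have not been downloaded."
    )

    def build_messages(question: str) -> list[dict]:
        """Assemble a chat prompt that applies the first four strategies."""
        system = (
            # Strategy 1: clear instructions -- role, scope, and output format.
            "You answer questions about the refund policy quoted below. "
            "Answer in at most two sentences and cite the sentence you relied on.\n\n"
            # Strategy 2: reference text for the model to ground its answer in.
            "Reference text:\n" + REFERENCE_TEXT
        )
        user = (
            # Strategy 3: split the task into two explicit subtasks.
            "Step 1: say whether the question concerns physical or digital goods.\n"
            "Step 2: answer the question using only the reference text.\n"
            # Strategy 4: ask for reasoning before the final answer (time to "think").
            "Work through both steps before giving your final answer.\n\n"
            "Question: " + question
        )
        return [{"role": "system", "content": system},
                {"role": "user", "content": user}]

    if __name__ == "__main__":
        for message in build_messages("Can I return an e-book I bought last week?"):
            print(message["role"].upper())
            print(message["content"])
            print()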

Notable: OpenAI explicitly acknowledges the empirical (non-deterministic) nature of prompt engineering. No specific thresholds, success criteria, or quantifiable metrics are provided for any of the six strategies.
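
Strategy 6 is the one OpenAI frames in empirical terms ("building informative evals and iterating often"). The sketch below shows what a minimal eval of that kind could look like: two prompt variants scored against a small labeled set so a change is judged by measured results. The eval cases, grading rule, and model callable are placeholders, not anything OpenAI specifies.

    # Minimal sketch of "building informative evals": score two prompt variants
    # against a small labeled set. The eval cases, grading rule, and model callable
    # are placeholders invented here; swap in a real model call and task-specific grading.

    from typing import Callable

    EVAL_SET = [  # hypothetical (question, expected answer) pairs
        ("Can I return a printed book after 10 days?", "yes"),
        ("Can I return a downloaded e-book?", "no"),
    ]

    def accuracy(prompt_template: str, ask_model: Callable[[str], str]) -> float:
        """Fraction of eval cases whose model answer contains the expected label."""
        hits = 0
        for question, expected in EVAL_SET:
            answer = ask_model(prompt_template.format(question=question))
            hits += int(expected in answer.lower())
        return hits / len(EVAL_SET)

    def compare(variant_a: str, variant_b: str, ask_model: Callable[[str], str]) -> None:
        """Print both scores so a prompt change is judged on measured results."""
        print(f"variant A: {accuracy(variant_a, ask_model):.0%}")
        print(f"variant B: {accuracy(variant_b, ask_model):.0%}")

    if __name__ == "__main__":
        def fake_model(prompt: str) -> str:
            """Offline stand-in for a real model call, so the harness runs as-is."""
            return "no" if "e-book" in prompt else "yes"

        compare("Answer yes or no: {question}",
                "Quote the relevant policy sentence, then answer yes or no: {question}",
                fake_model)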

Quantifiable recommendations found: none of the six strategies includes a numerical threshold.

The guide's one measurable recommendation sits outside the six strategies: pin production applications to a specific model snapshot (e.g., "gpt-4.1-2025-04-14"). This is operational guidance rather than prompt engineering guidance.
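
A minimal sketch of that recommendation, assuming the OpenAI Python SDK's chat completions interface; the snapshot string is the one cited above, while the helper function and question are invented for illustration.

    # Sketch of the snapshot-pinning recommendation, assuming the OpenAI Python SDK's
    # chat completions interface (openai>=1.0). The snapshot string is the one cited
    # above; the helper is invented for illustration.

    from openai import OpenAI

    PINNED_MODEL = "gpt-4.1-2025-04-14"   # dated snapshot: behavior stays fixed
    # A floating alias such as "gpt-4.1" may change underneath a tuned prompt.

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model=PINNED_MODEL,
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content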

Relevance to Hypotheses

Hypothesis   Relationship   Rationale
H1           Contradicts    Zero quantifiable recommendations among the six strategies
H2           Supports       All six strategies use qualitative language
H3           Supports       Some structural recommendations (tool use, reference text) are actionable without being quantifiable

Context

OpenAI's own statement that "AI engineering is inherently an empirical discipline" is itself significant evidence: the vendor most closely associated with "prompt engineering" does not provide quantifiable engineering specifications for it.