R0021/2026-03-25/Q003/SRC02/E01

Research R0021 — Prompt engineering definitions
Run 2026-03-25
Query Q003
Source SRC02
Evidence SRC02-E01
Type Factual

Anthropic's prompt engineering recommendations and measurability

URL: https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview

Extract

Anthropic's key recommendations:

  1. Use XML tags to structure prompts — Structural. Specific format recommendation but no quantifiable success criterion.
  2. Be clear and specific — Subjective. No metric for "clear."
  3. Think step by step — Structural technique (chain-of-thought).
  4. Use prompt chaining — Architectural pattern. Break complex tasks into subtasks.
  5. Set roles via system prompts — Structural technique.
  6. Handle large documents (20k+ tokens) with careful structure — Semi-quantifiable: specifies a token threshold.
  7. Use few-shot examples — Structural but no guidance on how many.

Quantifiable recommendations found: 1 semi-quantifiable (20k+ token threshold for large document handling). Remaining recommendations are structural or subjective.
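To make the structural (rather than quantifiable) character of these recommendations concrete, here is a minimal sketch combining two of them: XML tags for structure (1) and a few-shot example (7). The task, tag names, and helper function are hypothetical illustrations, not taken from Anthropic's documentation.

```python
def build_prompt(document: str, question: str) -> str:
    """Assemble an XML-tagged prompt with one few-shot example.

    Hypothetical sketch: the tag names (<example>, <document>,
    <question>, <answer>) illustrate the structural style; nothing
    here carries a measurable success criterion.
    """
    # One few-shot example, wrapped in tags so the model can
    # distinguish it from the real task.
    example = (
        "<example>\n"
        "<question>What colour is the sky?</question>\n"
        "<answer>Blue.</answer>\n"
        "</example>"
    )
    return (
        f"{example}\n"
        f"<document>{document}</document>\n"
        f"<question>{question}</question>\n"
        "Think step by step, then answer inside <answer> tags."
    )

prompt = build_prompt("Grass is green.", "What colour is grass?")
```

Note that every choice in the sketch (which tags, how many examples, how to phrase the step-by-step instruction) is left to the prompt author's judgment, which is precisely the gap the extract identifies.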

Anthropic also provides an interactive tutorial with 9 chapters — the tutorial teaches techniques but does not provide measurable success criteria.

Relevance to Hypotheses

Hypothesis  Relationship  Rationale
H1          Contradicts   Almost no quantifiable recommendations
H2          Supports      Predominantly qualitative and structural guidance
H3          Supports      One semi-quantifiable threshold (20k tokens) amid qualitative guidance

Context

Anthropic's guidance is notably more structural than OpenAI's (e.g. XML tags, prompt chaining), but equally non-quantifiable: neither vendor ties its recommendations to measurable outcomes.