R0021/2026-03-25/Q003/SRC02/E01¶
Anthropic's prompt engineering recommendations and measurability
URL: https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview
Extract¶
Anthropic's key recommendations:
- Use XML tags to structure prompts — Structural. Specific format recommendation but no quantifiable success criterion.
- Be clear and specific — Subjective. No metric for "clear."
- Think step by step — Structural technique (chain-of-thought).
- Use prompt chaining — Architectural pattern. Break complex tasks into subtasks.
- Set roles via system prompts — Structural technique.
- Handle large documents (20k+ tokens) with careful structure — Semi-quantifiable: specifies a token threshold.
- Use few-shot examples — Structural; no guidance on how many examples to include.
Quantifiable recommendations found: one semi-quantifiable (the 20k+ token threshold for large-document handling). All remaining recommendations are structural or subjective.
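To make the "structural" label concrete, here is a minimal sketch of the XML-tag recommendation from the list above — plain Python string assembly, not Anthropic's code. The tag names (`instructions`, `document`, `question`) and the helper function are illustrative choices, not part of any official API:

```python
# Sketch of the "use XML tags" recommendation: wrap each prompt component
# in a distinct tag so the model can separate instructions from data.
# Tag names here are arbitrary, not prescribed by Anthropic.

def build_xml_prompt(instructions: str, document: str, question: str) -> str:
    """Assemble a prompt with each component in its own XML tag."""
    return (
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<document>\n{document}\n</document>\n"
        f"<question>\n{question}\n</question>"
    )

prompt = build_xml_prompt(
    instructions="Answer using only the document below.",
    document="(source text would go here)",
    question="What does the document say about prompt length?",
)
```

Note that nothing in the recommendation yields a pass/fail metric — the guidance says *use* tags, not how much they must improve any measurable outcome, which is the point of the classification above.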
Anthropic also provides an interactive tutorial with 9 chapters — the tutorial teaches techniques but does not provide measurable success criteria.
Relevance to Hypotheses¶
| Hypothesis | Relationship | Basis |
|---|---|---|
| H1 | Contradicts | Almost no quantifiable recommendations |
| H2 | Supports | Predominantly qualitative and structural guidance |
| H3 | Supports | One semi-quantifiable threshold (20k tokens) amid qualitative guidance |
Context¶
Anthropic's guidance is notably more structural than OpenAI's (XML tags, prompt chaining), but equally lacking in measurable success criteria.