Research R0020 — Prompt Engineering Gaps
Run 2026-03-25
Query Q003
Hypothesis H1

Statement

Mainstream prompt engineering guides explicitly recommend MUST/MUST NOT directives as a core technique for controlling AI behavior.

Status

Current: Partially supported

Evidence shows that imperative constraint language is both used and recommended in mainstream guides, but the picture is nuanced. Anthropic's documentation includes strong constraint examples ("NEVER use ellipses," "CRITICAL: You MUST use this tool") while also advising that aggressive language be dialed back for newer models. Independent industry guides actively recommend constraint-based design.
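The contrast the evidence describes can be sketched as two prompt styles. This is a hypothetical illustration, not wording quoted from any vendor's documentation; the assistant role and `lookup_order` tool name are invented for the example.

```python
def imperative_prompt() -> str:
    """Constraint-heavy style, as seen in older guide examples (illustrative)."""
    return (
        "You are a support assistant.\n"
        "CRITICAL: You MUST use the lookup_order tool before answering.\n"
        "NEVER use ellipses. You MUST NOT speculate about refund amounts."
    )


def conversational_prompt() -> str:
    """Softened, explanatory style the newer guidance points toward (illustrative)."""
    return (
        "You are a support assistant.\n"
        "Check the order with the lookup_order tool before answering, "
        "write in complete sentences, and avoid guessing refund amounts."
    )
```

Both prompts encode the same three behavioral expectations; only the register of the instructions differs, which is the distinction H1 and H3 turn on.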

Supporting Evidence

SRC01-E01: Anthropic docs use imperative language in examples but advise moderating it for newer models
SRC02-E01: Lakera guide advocates constraint-based design and explicit formatting directives
SRC03-E01: Industry sources recommend "MUST" language and treating prompts like contracts

Contradicting Evidence

SRC01-E02: Anthropic advises replacing "CRITICAL: You MUST" with normal prompting for newer models

Reasoning

The evidence supports H1 in that imperative constraints are both demonstrated and discussed in mainstream guides. However, the latest Anthropic guidance suggests a shift away from aggressive enforcement language toward more conversational instruction, a nuance better captured by H3.

Relationship to Other Hypotheses

H1 captures the current state (constraints are discussed) while H3 captures the emerging trend (moving from imperative to explanatory). H2 is clearly eliminated.