R0023/2026-03-25/Q002/SRC04
Anthropic prompt engineering guide — partially attributed (Rick Dakan, Joseph Feller)
Source
| Field | Value |
|---|---|
| Title | Prompt engineering overview (Anthropic Claude documentation) |
| Publisher | Anthropic |
| Author(s) | Rick Dakan, Joseph Feller, and Anthropic (partial attribution) |
| Date | Ongoing (continuously updated) |
| URL | https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview |
| Type | Vendor documentation |
Summary
| Dimension | Rating |
|---|---|
| Reliability | Medium |
| Relevance | High |
| Bias: Missing data | Some concerns |
| Bias: Measurement | N/A |
| Bias: Selective reporting | Some concerns |
| Bias: Randomization | N/A (not an RCT) |
| Bias: Protocol deviation | N/A (not an RCT) |
| Bias: COI/Funding | High risk |
Rationale
| Dimension | Rationale |
|---|---|
| Reliability | Partially attributed: Rick Dakan and Joseph Feller are identified as authors, and the guide includes an interactive tutorial. However, whether they are AI researchers or documentation writers cannot be easily verified. |
| Relevance | One of the three most influential vendor guides and directly relevant to Q002. Includes templates such as "You are an expert AI tax analyst" that empirical research has challenged. |
| Bias flags | Same COI as OpenAI: Anthropic has a commercial interest in making Claude appear effective with simple prompting techniques. |
| Evidence ID | Summary |
|---|---|
| SRC04-E01 | Anthropic guide partially attributed to Rick Dakan and Joseph Feller; recommends persona prompting, a technique that empirical research has challenged |