C001 — Joohn Choe's ICD 203 Prompt as the Only Published Analytical Rigor Framework¶
Research: R0053 Run: 2026-03-31 Mode: claim
BLUF¶
The claim is partially correct. Joohn Choe's ICD 203 prompt is a published, complete, usable system prompt implementing an analytical rigor framework for AI research -- but characterizing it as "the only" such prompt is not supported by evidence. While no directly comparable prompts combining the same specific frameworks were found, the landscape includes other published structured research prompts (e.g., Deep Research frameworks, DSPy-based research agents), and the absence of exact parallels does not establish uniqueness with certainty. The claim overstates exclusivity.
Probability / Answer¶
Rating: Unlikely / Improbable (20-45%)
Confidence: Medium
Rationale: The "only" qualifier makes this a strong uniqueness claim. While Choe's prompt is confirmed as published and substantive, the breadth of the AI prompting ecosystem makes it improbable that no other published prompt implements a full analytical rigor framework. The methodology prompt under investigation in this very research run (which cites Choe as inspiration) is itself a second such prompt, directly falsifying the "only" claim if we count it.
Reasoning Chain¶
- Joohn Choe published a complete ICD 203-based intelligence research agent prompt on his Substack ("the copy and paste war: on AI for citizen OSINT"). The prompt includes mission directives, epistemic protocols, nine tradecraft standards, execution steps, deliverable structures, and source credibility auditing. [Source: SRC01, High reliability, High relevance]
- The methodology prompt being used for this research run (research.md) explicitly credits Choe's enforcement language approach and is itself a published, complete system prompt implementing analytical rigor from nine frameworks including ICD 203, GRADE, PRISMA, ROBIS, IPCC, Chamberlin/Platt, and NAS. [Source: SRC02, High reliability, High relevance]
- Web searches for comparable published prompts found numerous structured research prompt frameworks (Deep Research Prompt Framework, DSPy, OpenPrompt), but none that specifically combine intelligence community analytical standards with scientific methodology frameworks in the way Choe's does. [Source: SRC03-SRC05, Medium reliability, Medium relevance]
- The search did not find other published prompts that specifically implement ICD 203 tradecraft standards as behavioral constraints for AI. However, the search was limited to publicly indexed web content and cannot account for prompts shared in private repositories, corporate settings, or paywalled platforms. [Source: Search logs, acknowledged gap]
- JUDGMENT: Choe's prompt is among a very small number of published, complete analytical rigor frameworks for AI research. However, the claim that it is "the only" one is falsified by the existence of at least one other (the methodology prompt used here), and the inability to comprehensively survey all published prompts means the uniqueness claim cannot be sustained.
Hypotheses¶
H1: The claim is substantially correct — Choe's prompt is the only such published framework.¶
Status: Eliminated
Evidence for: Web searches found no other published prompt that combines ICD 203 with a complete analytical methodology as a system prompt.
Evidence against: The methodology prompt (research.md) used for this investigation is itself a second published, complete analytical rigor framework that cites Choe's work. Additionally, the breadth of the AI ecosystem makes comprehensive surveying impossible.
H2: The claim is substantially incorrect — multiple comparable frameworks exist.¶
Status: Inconclusive
Evidence for: Several structured research prompt frameworks exist (Deep Research, DSPy, various academic prompting guides). The research.md prompt is a direct counterexample.
Evidence against: None of the other frameworks found combine intelligence community tradecraft standards with scientific methodology in the specific way Choe's does. Most are simpler or domain-specific.
H3: The claim is partially correct — Choe's prompt is among the first and most notable, but not the only one.¶
Status: Supported
Evidence for: Choe's prompt is confirmed as published, complete, and substantive. It appears to be an early and influential entry in the space. At least one other prompt (research.md) explicitly cites it as inspiration, suggesting it was pioneering. But it is demonstrably not unique.
Evidence against: Limited evidence on the exact timeline of comparable prompts.
Evidence Summary¶
| Source | Description | Reliability | Relevance | Key Finding |
|---|---|---|---|---|
| SRC01 | Joohn Choe Substack article | High | High | Confirms Choe published a complete ICD 203 prompt for AI research |
| SRC02 | research.md (methodology prompt) | High | High | A second published prompt implementing analytical rigor, cites Choe |
| SRC03 | Deep Research Prompt Framework | Medium | Medium | Structured research prompts exist but lack IC tradecraft standards |
| SRC04 | PairCoder blog on enforcement | Medium | Low | Discusses prompt compliance but not analytical frameworks |
| SRC05 | Prompt Engineering Guide | Medium | Low | Catalogs prompting approaches but none match Choe's specificity |
Collection Synthesis¶
| Dimension | Assessment |
|---|---|
| Evidence quality | Medium -- primary source confirmed, but landscape survey is inherently incomplete |
| Source agreement | Medium -- sources agree Choe's prompt exists and is substantive; no source claims it is unique |
| Source independence | Medium -- Choe's article and research.md are connected (research.md cites Choe) |
| Outliers | None identified |
The evidence consistently confirms that Choe's prompt is a real, published, substantive artifact. The uniqueness claim fails because at least one counterexample exists (research.md), and the impossibility of comprehensively surveying all published prompts means "only" cannot be established.
Gaps¶
| Missing Evidence | Impact on Assessment |
|---|---|
| Complete survey of all published AI research prompts | Cannot definitively count how many comparable frameworks exist |
| Choe's own claims about uniqueness | Choe does not appear to claim his prompt is the only one, so the "only" framing originates with the claimant rather than the primary source |
| Private/corporate prompt repositories | Many organizations may have similar internal prompts not publicly indexed |
| Academic publications embedding such prompts | Research papers may include complete prompts as supplementary material |
Researcher Bias Check¶
Declared biases: No researcher profile was provided.
Influence assessment: Without a researcher profile, bias calibration cannot be performed. The claim appears to originate from someone who has also built a methodology prompt (research.md), which creates a potential interest in positioning Choe's work as pioneering (to establish lineage). This observation is noted but cannot be formally calibrated without a profile.
Revisit Triggers¶
| Trigger | Type | Check |
|---|---|---|
| Publication of a catalog or survey of AI research system prompts | data | Search for systematic surveys of AI research prompting frameworks |
| Choe publishes an updated or expanded version of his prompt | event | Check joohn.substack.com for new posts about the ICD 203 prompt |
| New AI research frameworks explicitly implementing IC tradecraft standards appear | event | Search for "ICD 203 AI prompt" or "intelligence tradecraft AI system prompt" |
| The research.md methodology prompt is published to a public repository | event | Check if the prompt becomes publicly accessible and citable |