R0049/2026-03-31/Q001-SRC04-E01
Extract
Scott Roberts built three LLM-powered Streamlit web applications that implement structured analytic techniques (SATs) from intelligence analysis:
- Starbursting — Question generation to explore topics and define problem scope (pre-analysis phase)
- Analysis of Competing Hypotheses (ACH) — Hypothesis evaluation through evidence scoring, using multi-step LLM querying in which later queries build on earlier responses (mid-analysis phase)
- Key Assumptions Check — Identifying underlying assumptions in finished intelligence products (post-analysis phase)
Built with OpenAI GPT-4, LangChain, and Pydantic; source code is published on GitHub. Roberts describes these as a "human-machine team" approach: the LLM augments but does not replace the human analyst. The ACH tool exports its results as CSV so that analysts can modify the scoring and make the final judgment.
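The core ACH mechanic described above, scoring each piece of evidence against each hypothesis and exporting the resulting matrix as CSV, can be sketched roughly as follows. This is a minimal illustration, not Roberts' code: the class and method names are invented, the multi-step LLM querying is omitted entirely, and a plain dataclass stands in for the Pydantic models and LangChain chains the actual tools use.

```python
# Hypothetical ACH scoring matrix. In ACH, each evidence item is rated
# against each hypothesis, and hypotheses are ranked by counting
# inconsistencies: the one with the FEWEST inconsistencies survives.
# All names here are illustrative assumptions, not Roberts' API.
import csv
import io
from dataclasses import dataclass, field


@dataclass
class ACHMatrix:
    hypotheses: list
    evidence: list
    # scores[(evidence, hypothesis)]: -1 inconsistent, 0 neutral, +1 consistent
    scores: dict = field(default_factory=dict)

    def score(self, e: str, h: str, value: int) -> None:
        # In the real tool this value would come from an LLM query chain;
        # here it is supplied directly.
        self.scores[(e, h)] = value

    def inconsistency(self, h: str) -> int:
        # Count evidence items rated inconsistent with hypothesis h.
        return sum(1 for e in self.evidence if self.scores.get((e, h), 0) < 0)

    def ranked(self) -> list:
        # Least-inconsistent hypothesis first.
        return sorted(self.hypotheses, key=self.inconsistency)

    def to_csv(self) -> str:
        # One row per evidence item, one column per hypothesis,
        # matching the analyst-editable CSV export described above.
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["evidence"] + self.hypotheses)
        for e in self.evidence:
            writer.writerow([e] + [self.scores.get((e, h), 0) for h in self.hypotheses])
        return buf.getvalue()
```

A usage pass would build the matrix, score each (evidence, hypothesis) pair, call `ranked()` to order the hypotheses, and hand the `to_csv()` output to the analyst for review, mirroring the human-machine division of labor Roberts describes.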
Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Partial support — implements multiple SATs but as separate tools, not a unified framework | Moderate |
| H2 | Contradicts — published prompt implementations of analytical techniques exist | Strong |
| H3 | Supports — three separate techniques rather than an integrated methodology | Strong |
Context
Roberts' work is the closest finding to the query's target: he implements three of Heuer's structured analytic techniques as LLM-powered tools, covering the pre-, mid-, and post-analysis phases. However, the implementation consists of three separate applications rather than a unified methodology. Missing from it are evidence quality scoring, calibrated probability language, bias assessment, search transparency logging, and self-audit mechanisms.
Notes
Roberts explicitly states: "LLMs are a great tool to help with SATs, but they are not a replacement for human analysts." He also notes: "An AI system doesn't have to be better than a human, just better than the best available human." These reflect a pragmatic, tool-oriented approach rather than a comprehensive-framework approach.