SRC02-E01 — Three SATs Implemented via LLM
Description
Open-source implementation of three structured analytic techniques (SATs) via LLMs, built with Streamlit, LangChain, and OpenAI GPT-4.
URL
https://sroberts.io/posts/llm-sats-ftw/
Extract
The implementation covers three SATs:

1. Starbursting — an Idea Generation SAT that develops questions about a topic.
2. Analysis of Competing Hypotheses (ACH) — generates hypotheses and evaluates evidence for/against each, with CSV export for human review.
3. Key Assumptions Check — identifies assumptions in finished intelligence products.

Built with Streamlit (Python), OpenAI GPT-4, LangChain, and Pydantic. The author describes it as "basic, but effective" tooling designed for small teams.
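The ACH step's human-in-the-loop flow (score each piece of evidence against each hypothesis, then export the matrix as CSV for analyst review) can be sketched with stdlib Python. This is a minimal illustration of the pattern, not the post's actual code; the `ACHMatrix` class, the C/I/N rating scale, and all hypothesis and evidence strings are assumptions introduced here:

```python
import csv
import io
from dataclasses import dataclass, field


@dataclass
class ACHMatrix:
    """Minimal ACH matrix: hypotheses vs. evidence items, each cell rated
    C (consistent), I (inconsistent), or N (neutral). Illustrative only;
    in the described tool, an LLM would propose hypotheses and ratings."""
    hypotheses: list[str]
    evidence: list[str] = field(default_factory=list)
    # (evidence item, hypothesis) -> "C" / "I" / "N"
    scores: dict[tuple[str, str], str] = field(default_factory=dict)

    def score(self, item: str, hypothesis: str, rating: str) -> None:
        """Record a rating, registering the evidence item on first use."""
        if item not in self.evidence:
            self.evidence.append(item)
        self.scores[(item, hypothesis)] = rating

    def to_csv(self) -> str:
        """Export the matrix for human review: one row per evidence item,
        one column per hypothesis, unrated cells defaulting to N."""
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["evidence"] + self.hypotheses)
        for item in self.evidence:
            writer.writerow(
                [item] + [self.scores.get((item, h), "N") for h in self.hypotheses]
            )
        return buf.getvalue()


# Usage (hypothetical hypotheses and evidence, for illustration only):
m = ACHMatrix(hypotheses=["H1: insider", "H2: external actor"])
m.score("VPN logs show off-hours access", "H1: insider", "C")
m.score("VPN logs show off-hours access", "H2: external actor", "I")
print(m.to_csv())
```

The point of the CSV round-trip is that the analyst, not the model, remains the final judge of each rating — the export is the hand-off point in the human-in-the-loop design.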
Relevance to Hypotheses
| Hypothesis | Relevance | Strength |
|---|---|---|
| H1 — Complete prompts exist | Contradicts (only 3 of 66 SATs, not a system prompt) | Moderate |
| H2 — No such prompts exist | Contradicts (goes beyond narrow tasks) | Moderate |
| H3 — Partial implementations exist | Supports | Strong |
Context
FACT: The implementation is in Python code using LangChain, not as a standalone system prompt. FACT: It covers 3 of the 66 SATs cataloged by Heuer & Pherson. JUDGMENT: This represents the most directly relevant partial implementation for Q001, bridging intelligence analysis methodology with LLM capabilities.
Notes
The human-in-the-loop design (CSV export for analyst review) suggests the author recognized that full automation of analytical rigor is premature. The choice of ACH specifically demonstrates awareness of the intelligence analysis tradition.