# SRC05-E01 — GAO Findings on Federal AI Training and Management

## Extract
GAO found that "officials at five of the 12 selected federal agencies (DHS, DOD, DOE, NASA, and State) acknowledge that generative AI can produce biased outputs or hallucinations — outputs that seem plausible but are ultimately false." Eleven of the 12 agencies have established specific guidelines on appropriate use, but "six agencies (Commerce, DHS, DOE, GSA, NSF, and State) reported difficulties in maintaining current and appropriate use policies for generative AI." Generative AI use cases grew nine-fold, from 32 to 282, between 2023 and 2024. Federal guidance highlights "risk considerations when using generative AI, such as hallucinations, misinterpretations, deepfakes, and intellectual property infringements."
## Relevance to Hypotheses
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Supports — agencies acknowledge risks and have guidelines | Moderate |
| H2 | Partially supports — only five of 12 agencies explicitly acknowledge hallucinations; half report difficulty keeping policies current | Moderate |
| H3 | Strongly supports — guidelines exist but are incomplete, struggle to keep pace with adoption, and vary widely across agencies | Strong |
## Context
This GAO audit provides the most systematic evidence available of the gap between the speed of AI adoption (nine-fold growth in use cases) and the adequacy of training and policy. The finding that only five of the 12 agencies explicitly acknowledge hallucinations is particularly striking.
## Notes
The GAO report measures what agencies actually do rather than what they should do, making it an unusually reliable evidence source. The rapid growth in AI use cases against a backdrop of policy-maintenance struggles suggests that training is lagging behind deployment.