R0044/2026-03-29/Q001/SRC05/E01
FINRA's GenAI guidance focuses entirely on human supervisory processes and organizational controls, with no requirements constraining AI system behavior to prevent automation bias or agreeable-but-incorrect output.
URL: https://www.finra.org/rules-guidance/guidance/reports/2026-finra-annual-regulatory-oversight-report/gen-ai
Extract
FINRA's guidance identifies hallucinations as a key risk: "instances where the model generates information that is inaccurate or misleading, yet is presented as factual information." However, this is framed as a risk firms should mitigate through supervision, not as a constraint on system design.
FINRA recommends "ongoing monitoring of prompts, responses and outputs to confirm the GenAI solution continues to perform as expected" and "validation and human-in-the-loop review of model outputs." Firms should establish "policies and procedures to develop, implement, use and monitor GenAI" with "comprehensive documentation."
FINRA's guidance emphasizes human oversight and validation rather than mandating that AI systems themselves incorporate safeguards against sycophancy or automation bias. Responsibility for identifying and correcting incorrect outputs rests with firms' supervisory processes and human reviewers, not with the AI systems themselves. The report also does not address whether systems should actively contradict users or resist plausible-but-false requests.
Relevance to Hypotheses
| Hypothesis | Relationship | Notes |
|---|---|---|
| H1 | N/A | FINRA provides no system-side requirements; this is evidence of absence rather than contradicting evidence |
| H2 | Supports | Financial services regulation takes a purely human-focused approach to AI risk management |
| H3 | Supports | Demonstrates that requirements are fragmented across sectors: financial services has not caught up to defense, aviation, or healthcare in system-side requirements |
Context
FINRA's approach reflects the financial services industry's "technology-neutral" regulatory philosophy: the same supervisory obligations apply whether a decision comes from a human or an algorithm. As a result, the sector imposes no AI-specific behavioral constraints on the systems themselves.