R0041/2026-03-28/Q002/SRC06/E01
FINRA expects firms to assess AI compliance, including controls for hallucinations and bias, but does not mention sycophancy as a distinct risk category.
URL: https://www.finra.org/rules-guidance/guidance/reports/2026-finra-annual-regulatory-oversight-report/gen-ai
Extract
FINRA's 2026 oversight report expects firms to "establish governance frameworks to supervise GenAI usage, including controls that should address hallucinations, bias, cybersecurity risks, and ongoing human monitoring of model outputs." Model explainability is identified as "the biggest regulatory challenge," as some AI models operate as "black boxes." Advisory firms must "clearly disclose the actual role and limitations of their AI tools." The report does not mention sycophancy, user-pleasing behavior, or AI agreeability as distinct risk categories. The closest analogue is the requirement to address "bias," which could arguably encompass sycophantic deference to user preferences.
Relevance to Hypotheses
| Hypothesis | Relationship | Rationale |
|---|---|---|
| H1 | Contradicts | FINRA's financial-services guidance does not name sycophancy as a stated requirement |
| H2 | Supports | No evidence of sycophancy-specific requirements in financial services |
| H3 | Supports | FINRA addresses related concepts (hallucination, bias, explainability) but not sycophancy as a distinct category |
Context
FINRA's omission of sycophancy from its AI governance guidance is significant because financial advisory is a high-stakes context in which sycophantic AI could cause direct financial harm (e.g., an AI confirming a client's desire to make a risky investment rather than providing balanced analysis). A minimal sketch of what a sycophancy-specific control could look like follows below.
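To make the gap concrete, the sketch below shows one way a sycophancy-specific check could sit inside the kind of output-monitoring control FINRA's guidance already expects. Everything here is a hypothetical illustration: the `needs_human_review` function, the keyword heuristics, and the marker lists are assumptions for exposition, not anything FINRA prescribes or a production-grade method.

```python
# Hypothetical sketch of a sycophancy-specific control layered onto the
# "ongoing human monitoring of model outputs" requirement quoted above.
# All names and heuristics below are illustrative assumptions, not a
# FINRA-prescribed or production-grade approach.

AGREEMENT_MARKERS = ("great idea", "you're right", "absolutely", "i agree")
RISK_MARKERS = ("margin", "leverage", "all-in", "options", "crypto")
BALANCE_MARKERS = ("risk", "downside", "alternative")

def needs_human_review(client_prompt: str, model_output: str) -> bool:
    """Flag outputs that agree with a risky client request without
    offering balancing analysis, the failure mode described above."""
    prompt = client_prompt.lower()
    output = model_output.lower()
    risky_request = any(term in prompt for term in RISK_MARKERS)
    agrees = any(marker in output for marker in AGREEMENT_MARKERS)
    offers_balance = any(term in output for term in BALANCE_MARKERS)
    return risky_request and agrees and not offers_balance

# Example: an output that simply confirms a risky request gets escalated.
if needs_human_review(
    "Should I put my whole portfolio into leveraged options?",
    "Great idea, that could really maximize your returns!",
):
    print("Escalate to human reviewer: possible sycophantic response")
```

The point of the sketch is the structural one made in the extract: hallucination and bias controls would not necessarily trigger here, since the flagged output is neither factually false nor demographically biased, which is why treating sycophancy as a distinct risk category would change what firms monitor.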