R0044/2026-04-01/Q001/SRC05/E01

Research R0044 — Expanded Vocabulary Research
Run 2026-04-01
Query Q001
Source SRC05
Evidence SRC05-E01
Type Factual

FINRA requires supervisory procedures and human review but imposes no behavioral constraints on system outputs

URL: https://www.finra.org/rules-guidance/key-topics/fintech/report/artificial-intelligence-in-the-securities-industry/key-challenges

Extract

FINRA's AI guidance addresses:

  • Model risk management: Validating inputs for bias, reviewing algorithms for errors, verifying parameters, ensuring output explainability.
  • Human review layer: Building "a layer of human review of the model outputs" for "black box" models.
  • Supervisory requirements: Under FINRA Rules 3110 and 3120, firms must maintain supervisory control systems and updated written supervisory procedures.
  • Guardrails: "Appropriate thresholds and guardrails, where ML models trigger actions autonomously."
  • Testing: "Extensive testing of applications" across scenarios with "fallback plans" if systems fail.

FINRA does not use vocabulary related to sycophancy, automation bias, overreliance, or assumption-reinforcing behavior. Requirements are framed as model governance and supervisory obligations — all human-side.

Relevance to Hypotheses

Hypothesis | Relationship | Notes
H1 | Contradicts | No system-side output behavior constraints exist in FINRA guidance
H2 | Supports (weakly) | Recognizes AI risks but addresses them through human oversight only
H3 | Supports | Financial services regulation is entirely human-side for this specific concern

Context

FINRA's guidance reflects the financial services industry's pre-GenAI regulatory framework. The 2026 FINRA Annual Regulatory Oversight Report does address GenAI-specific concerns, including AI agents acting autonomously, but it still frames solutions in terms of human supervision and auditability rather than constraints on AI output behavior.