R0041/2026-03-28/Q002/SRC05/E01¶
The FAA's AI safety roadmap establishes seven guiding principles for AI in aviation but does not specifically address sycophancy or AI agreeability.
URL: https://www.faa.gov/aircraft/air_cert/step/roadmap_for_AI_safety_assurance
Extract¶
The FAA Roadmap for AI Safety Assurance establishes seven guiding principles and requires that "the system designer must delineate the responsibilities assigned to human beings compared to the requirements assigned to systems and tools." The framework distinguishes between "learned AI" (static) and "learning AI" (adaptive), noting that learning AI "may adapt in a manner that degrades performance." All AI/ML technologies must meet certification requirements with rigor varying by criticality and risk. The framework does not mention sycophancy, agreeability, user-pleasing behavior, or confirmation bias in AI systems. The focus is on correctness, safety assurance, and human-AI responsibility boundaries.
Relevance to Hypotheses¶
| Hypothesis | Relationship | Notes |
|---|---|---|
| H1 | Contradicts | The FAA's primary AI safety assurance framework does not include sycophancy as a certification requirement |
| H2 | Supports | Aviation regulators have not formalized sycophancy as a safety concern |
| H3 | Supports | The FAA addresses the underlying concerns (performance degradation, human-AI boundaries) but not sycophancy specifically |
Context¶
The FAA's omission of sycophancy is notable because aviation is among the most safety-critical sectors for AI deployment. If sycophancy were a recognized concern in aviation regulation, this framework would be the expected place to address it.