R0044/2026-04-01/Q001/SRC06/E01

Research R0044 — Expanded Vocabulary Research
Run 2026-04-01
Query Q001
Source SRC06
Evidence SRC06-E01
Type Factual

Aviation regulators identify automation bias but address it through certification, not output behavior constraints

URL: https://www.faa.gov/aircraft/air_cert/step/roadmap_for_AI_safety_assurance

Extract

The FAA's AI safety assurance roadmap and EASA's certification framework address:

  • FAA considerations: Teams deploying AI/ML in aviation "need to consider reliability and robustness, explainability and transparency, data quality and bias, trust, overreliance, and automation bias, and continuous monitoring and maintenance."
  • EASA certification levels: Level 1 (human assistance) requires learning assurance, AI explainability, and continuous safety assessment. Level 2 adds ethics-based assessment and human-AI teaming requirements.
  • Joint standards work: The joint SAE G-34 / EUROCAE WG-114 aerospace standards committee is developing means of compliance for certifying ML-based systems in aircraft and air traffic management.

Aviation addresses automation bias primarily through:

  1. Certification processes for AI components
  2. Human-AI teaming design requirements (EASA Level 2)
  3. Explainability and transparency requirements
  4. Continuous monitoring and maintenance

No aviation standard requires the AI system to challenge pilot assumptions, express uncertainty in its output, or avoid reinforcing operator expectations.

Relevance to Hypotheses

Hypothesis | Relationship | Notes
H1 | Contradicts | Aviation has no system-side output behavior constraints
H2 | Supports | Aviation recognizes the problem (explicitly lists "trust, overreliance, and automation bias") but addresses it through certification and testing
H3 | Supports | Aviation-specific requirements are entirely process/human-side

Context

Aviation has the longest history of addressing automation bias (the term originated in aviation human factors research). However, aviation AI certification focuses on ensuring the system is correct (through testing and validation) rather than ensuring it behaves appropriately when uncertain (through output behavioral constraints). This is likely because traditional aviation automation is deterministic — the concept of an AI system "agreeing" with a pilot's expectations is more relevant to GenAI than to traditional aviation automation.