R0043/2026-03-28/Q001/SRC08/E01
Aviation terminology gap: existing terms insufficient for AI
URL: https://www.mdpi.com/2673-7590/5/2/42
Extract
Aviation's established terminology:

- Automation complacency: "The failure to be vigilant in monitoring automation prior to an automation failure"; a catch-all term cited as a contributing factor in aviation accidents
- Over-trust: Trusting automation beyond its actual capability
- Situation awareness: The operator's perception and comprehension of the operational environment
- Mental workload: The cognitive demands placed on operators by automation
Critical finding for vocabulary mapping: "As AI-based systems are introduced to support flight crews and air traffic controllers, taxonomies such as HFACS will likely need updating with new terms linked to human-AI interactions, as existing blanket terms like complacency and over-trust are probably not nuanced enough to capture the full transactional relationships between human crews and AI support systems."
JUDGMENT: Aviation researchers themselves recognize that their existing vocabulary is insufficient for AI. The terms "complacency" and "over-trust" were developed for deterministic automation (autopilots that do not adapt to please the pilot). AI systems that actively adjust their output based on operator expectations represent a qualitatively different phenomenon that the existing vocabulary does not capture.
Relevance to Hypotheses
| Hypothesis | Relationship | Notes |
|---|---|---|
| H1 | Partially contradicts | Aviation has terms but its own researchers say they are insufficient |
| H2 | Contradicts | Terms exist, even if acknowledged as incomplete |
| H3 | Strongly supports | Aviation experts explicitly state the vocabulary gap exists and needs to be filled — the strongest evidence for H3 |
Context
HFACS (Human Factors Analysis and Classification System) is the standard taxonomy for aviation accident analysis. The call to update it for AI-specific terms is significant because it comes from within the aviation community, not from AI safety researchers.