R0043/2026-04-01/Q001 — Query Definition

Query as Received

What terms do different industries and disciplines use to describe AI behavior that prioritizes user agreement, comfort, or satisfaction over accuracy, correctness, or safety? Map the complete vocabulary across: AI safety research, defense/military AI, healthcare AI, financial services AI, aviation/FAA, academic integrity, enterprise software evaluation, and UX/product design. For each domain, identify the specific terms used for the phenomenon that AI researchers call "sycophancy."

Query as Clarified

This is an open-ended vocabulary mapping exercise across eight named domains. The query asks for a comprehensive inventory of terminology used to describe the same underlying phenomenon — AI systems producing agreeable-but-wrong output — but the terms may differ substantially across domains because each community developed its language independently. The query does not assume every domain has a term; the absence of domain-specific vocabulary is itself a finding.

Embedded assumption: that the phenomenon AI researchers call "sycophancy" is the same phenomenon other industries describe under different names. This assumption is partially valid — the core behavior (prioritizing agreement over accuracy) overlaps, but the causal framing differs. AI safety researchers focus on model behavior; human factors researchers focus on human response to automation; regulated industries focus on output reliability. These are related but not identical framings.

BLUF

The phenomenon AI researchers call "sycophancy" maps to a rich vocabulary across domains, but no two fields use the same primary term. AI safety uses "sycophancy" and "sycophantic behavior"; human factors and aviation use "automation bias" and "automation complacency"; healthcare adds "acquiescence" and "deference"; defense uses "overtrust" and "miscalibrated trust"; financial services uses "model risk" and "effective challenge"; UX/product design frames it as a "satisfaction-accuracy tradeoff"; and academic integrity focuses on "confirmation bias amplification." A newer research term, "overreliance," is emerging as a cross-cutting concept. The fragmentation is real: each vocabulary reflects a genuinely different framing of the same underlying problem.
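For quick reference, the domain-to-term mapping above can be collapsed into a simple lookup structure. The Python sketch below is illustrative only: the term lists are taken from this summary, while the domain keys and helper function are shorthand chosen for the sketch, not part of the underlying research artifacts.

```python
# Illustrative lookup: primary terms each domain uses for the behavior
# AI safety research calls "sycophancy". Term lists are taken from the
# BLUF above; domain keys are shorthand labels chosen for this sketch.
SYCOPHANCY_TERMS: dict[str, list[str]] = {
    "ai_safety": ["sycophancy", "sycophantic behavior"],
    "aviation_human_factors": ["automation bias", "automation complacency"],
    "healthcare": ["acquiescence", "deference"],  # in addition to the human-factors terms
    "defense": ["overtrust", "miscalibrated trust"],
    "financial_services": ["model risk", "effective challenge"],
    "ux_product_design": ["satisfaction-accuracy tradeoff"],
    "academic_integrity": ["confirmation bias amplification"],
    "enterprise_software_evaluation": [],  # no established primary term found
    "cross_cutting": ["overreliance"],  # emerging term spanning domains
}

def terms_for(domain: str) -> list[str]:
    """Return the mapped terms for a domain; empty list if none recorded."""
    return SYCOPHANCY_TERMS.get(domain, [])

if __name__ == "__main__":
    for domain, terms in SYCOPHANCY_TERMS.items():
        print(f"{domain}: {', '.join(terms) or '(no established term)'}")
```

The empty entry for enterprise software evaluation reflects the thin coverage noted in the assessment summary; as flagged in the clarified query, the absence of an established term is itself a finding.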

Scope

  • Domain: Cross-domain terminology analysis, AI behavior and human-AI interaction
  • Timeframe: Current as of April 2026, with historical roots dating to ~2001 (automation bias literature)
  • Testability: Verified by searching domain-specific literature and regulatory documents for terminology used to describe AI agreement-seeking or agreement-inducing behavior

Assessment Summary

Probability: N/A (open-ended query)

Confidence: Medium

Hypothesis outcome: Open-ended query without pre-enumerated hypotheses. The answer was synthesized from evidence across eight domains, with strong coverage in AI safety, aviation/human factors, and healthcare, and thinner coverage in academic integrity and enterprise software evaluation.

[Full assessment in assessment.md.]

Status

  • Date created: 2026-04-01
  • Date completed: 2026-04-01
  • Researcher profile: Phillip Moore
  • Prompt version: Unified Research Methodology v1
  • Revisit by: 2027-04-01
  • Revisit trigger: New cross-domain taxonomy published; NIST or ISO adopts sycophancy-related terminology; major AI incident triggers regulatory vocabulary standardization