
R0043/2026-03-28/Q001 — Query Definition

Query as Received

What terms do different industries and disciplines use to describe AI behavior that prioritizes user agreement, comfort, or satisfaction over accuracy, correctness, or safety? Map the complete vocabulary across: AI safety research, defense/military AI, healthcare AI, financial services AI, aviation/FAA, academic integrity, enterprise software evaluation, and UX/product design. For each domain, identify the specific terms used for the phenomenon that AI researchers call "sycophancy."

Query as Clarified

  • Subject: The vocabulary used across eight specified domains to describe AI behavior that prioritizes user agreement over accuracy
  • Scope: Terminology mapping — identifying the domain-specific names for the phenomenon AI safety researchers call "sycophancy," including formal terms in standards/regulations, informal terms in practitioner discourse, and related but distinct concepts that overlap with sycophancy
  • Evidence basis: Published standards, regulatory documents, academic papers, industry frameworks, practitioner literature, and policy guidance from each domain
  • Temporal sensitivity: Focus on current (2024-2026) terminology, though some terms (automation bias, complacency) have decades of prior literature

Ambiguities Identified

  1. "Complete vocabulary" scope: The query asks to "map the complete vocabulary," but no terminology survey can be truly complete. The research will aim for comprehensive coverage of established, documented terminology rather than exhaustive capture of every informal usage.
  2. Boundary between sycophancy and adjacent phenomena: Some domains address user overreliance on AI (the human side) rather than AI agreeableness (the system side). Both are relevant but represent different causal directions. The research will map both but distinguish them.
  3. Domain boundaries are porous: Defense terminology influences aviation; healthcare terminology borrows from human factors engineering. The mapping will note cross-domain borrowing.

Sub-Questions

  1. What is the canonical definition and taxonomy of "sycophancy" in AI safety research?
  2. What terms does defense/military AI use for AI systems that prioritize operator agreement over accuracy?
  3. What terms does healthcare AI use for clinical decision support that fails to challenge clinician assumptions?
  4. What terms does financial services AI use for model outputs that prioritize user satisfaction over correctness?
  5. What terms does aviation/FAA use for automation that induces inappropriate operator trust?
  6. What terms does academic integrity literature use for AI that provides uncritical affirmation?
  7. What terms does enterprise software evaluation use for AI that prioritizes user satisfaction over accuracy?
  8. What terms does UX/product design use for chatbot behavior that confirms rather than challenges?
  9. Where do these vocabularies overlap, and where are they genuinely distinct concepts? (A possible record structure for capturing these distinctions is sketched below.)
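
Sub-question 9 and Ambiguity 2 both require keeping the same distinctions attached to every documented term: which domain uses it, whether it names system-side agreeableness or human-side overreliance, and how closely it maps to "sycophancy." Purely as an illustration of what a mapping record could track (this structure is not part of the query as received; every field name and category value below is an assumption of the sketch), one option looks like this:

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Illustrative record for one documented term in the cross-domain mapping.
# Field names and category values are assumptions of this sketch, not
# terminology taken from any of the eight domains.
@dataclass
class TermRecord:
    domain: str            # e.g. "aviation/FAA", "healthcare AI"
    term: str              # the domain's own name for the behavior
    definition: str        # paraphrase of the definition in the cited source
    source_type: str       # standard, regulation, paper, or practitioner text
    causal_side: Literal["system", "human"]                  # AI agreeableness vs. user overreliance (Ambiguity 2)
    relation: Literal["synonym", "overlapping", "adjacent"]  # relation to "sycophancy" (Sub-question 9)
    borrowed_from: Optional[str] = None                      # cross-domain borrowing, if documented (Ambiguity 3)

# Example entry: "automation bias" is a human-side, adjacent concept with a
# long human-factors literature, as noted under temporal sensitivity above.
example = TermRecord(
    domain="aviation/FAA",
    term="automation bias",
    definition="Operators' tendency to over-trust and defer to automated outputs.",
    source_type="human factors literature",
    causal_side="human",
    relation="adjacent",
)
```

Keeping causal_side separate from relation preserves the distinction drawn in Ambiguity 2: a term can overlap with sycophancy in its effects while describing the human rather than the system side of the interaction.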

Hypotheses

  • H1 (Rich cross-domain vocabulary exists): Each domain has developed its own specific terminology for the sycophancy phenomenon, creating a large but fragmented vocabulary.
  • H2 (No cross-domain vocabulary exists): Domains outside AI safety have not named this phenomenon; "sycophancy" is the only term in use.
  • H3 (Partial vocabulary with systematic gaps): Some domains have well-developed terminology while others use generic or borrowed terms, and the vocabularies address different facets of the same underlying problem.
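
H1 through H3 are framed as mutually exclusive outcomes, so the research eventually needs a rule for turning per-domain findings into a verdict. The sketch below is one possible way to state such a rule; the coverage labels and the function name are assumptions of this illustration, not part of the protocol.

```python
from collections import Counter

def classify_hypothesis(domain_coverage: dict) -> str:
    """Map per-domain coverage labels to H1/H2/H3.

    Assumed labels: "native" (domain has its own documented term),
    "borrowed" (generic or imported terminology only), "none" (no
    documented term beyond "sycophancy" itself).
    """
    counts = Counter(domain_coverage.values())
    if counts["native"] == len(domain_coverage):
        return "H1: rich cross-domain vocabulary"
    if counts["native"] == 0 and counts["borrowed"] == 0:
        return "H2: no cross-domain vocabulary"
    return "H3: partial vocabulary with systematic gaps"

# Invented coverage labels, for illustration only.
print(classify_hypothesis({
    "aviation/FAA": "native",
    "healthcare AI": "native",
    "financial services AI": "borrowed",
    "academic integrity": "none",
}))  # -> H3: partial vocabulary with systematic gaps
```

Any real rule would need a finer grain (for example, distinguishing formal standards from practitioner usage), but even this coarse version makes explicit that H1 and H2 are the boundary cases and H3 covers everything in between.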