R0043 — Sycophancy Vocabulary¶
Mode: Query · Status: Active · Tags: AI sycophancy, vocabulary, taxonomy, cross-domain terminology
Input¶
- Q1: What terms do different industries and disciplines use to describe AI behavior that prioritizes user agreement, comfort, or satisfaction over accuracy, correctness, or safety? Map the complete vocabulary across: AI safety research, defense/military AI, healthcare AI, financial services AI, aviation/FAA, academic integrity, enterprise software evaluation, and UX/product design. For each domain, identify the specific terms used for the phenomenon that AI researchers call "sycophancy."
- Q2: Using the vocabulary identified in Q1, search for enterprise requirements, procurement specifications, regulatory guidance, or deployment standards that address the sycophancy phenomenon under its domain-specific names. Focus on regulated industries (defense, healthcare, finance, aviation) where agreeable-but-wrong AI output could cause harm.
- Q3: Has the vocabulary gap itself been identified as a problem in the AI safety or AI governance literature? Are there researchers or organizations working to create a shared taxonomy that bridges AI safety terminology with regulated-industry terminology?
Runs¶
2026-03-28 — Initial research run
Mode: Query · Queries: 3 · Prompt: Unified Research Methodology v1 · Model: Claude Opus 4.6
First investigation of sycophancy vocabulary across 8 domains.
2026-04-01 — Rerun (isolation-compliant)
Mode: Query · Queries: 3 · Prompt: Unified Research Methodology v1 · Model: Claude Opus 4.6 (1M context)
Confirmed the fragmented vocabulary: no shared primary term for the phenomenon across domains. Key addition: the Roytburg & Miller network analysis showing 83% homophily between the AI safety and AI ethics communities, offered as a structural explanation for why sycophancy terminology fails to diffuse into regulated-industry vocabularies.