C013 — Different Domains Use Different Terms, Creating Systematic Blind Spots

Research: R0052 · Run: 2026-03-31 · Mode: claim

BLUF

The claim is almost certainly correct. It is well-established in information science and systematic review methodology that different domains use different terms for the same phenomenon, and single-term searches create systematic blind spots. This principle is foundational to systematic review search strategy design.

Probability / Answer

Rating: Almost certain (95-99%)
Confidence: High
Rationale: Multiple authoritative sources in systematic review methodology explicitly address this problem. Research libraries, PRISMA guidance, and academic studies all document the terminology diversity problem and its impact on search completeness.

Reasoning Chain

  1. The UC Davis systematic review guide explicitly states that time spent identifying "all possible synonyms and related terms for search concepts will ensure that searches retrieve as many relevant records as possible." [Source: SRC01, High, High]
  2. The ACM study on conversational agents found 54 different terms used across disciplines for the same phenomenon, with "little consistency in the use of various terms." [Source: SRC02, High, High]
  3. Library guides for systematic reviews consistently advise mapping vocabulary across disciplines, checking controlled vocabularies (like MeSH), and considering domain-specific terminology. [Source: SRC03, High, High]
  4. The methodology prompt itself references this: "AI researchers say 'sycophancy' while healthcare says 'helpfulness over critical thinking' and defense says 'caving to user expectations.'" [Source: methodology prompt, High, High]
  5. JUDGMENT: The claim is well-established in information science. Cross-disciplinary terminology variation is a documented phenomenon that creates real blind spots in single-term searches.
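The mitigation the reasoning chain points to — mapping each domain's phrasing and OR-ing them into one query — can be sketched in a few lines. This is a hypothetical illustration using the sycophancy example from step 4; the term lists and the `build_query` helper are illustrative, not part of any cited methodology.

```python
# Hypothetical sketch: expanding one search concept into a cross-domain
# boolean query. Term lists below are illustrative, not exhaustive.
DOMAIN_TERMS = {
    "ai_research": ["sycophancy", "sycophantic behavior"],
    "healthcare": ["helpfulness over critical thinking"],
    "defense": ["caving to user expectations"],
}

def build_query(domain_terms):
    """OR together every domain's phrasing so that no single
    community's vocabulary becomes a blind spot."""
    terms = [t for group in domain_terms.values() for t in group]
    return " OR ".join(f'"{t}"' for t in terms)

print(build_query(DOMAIN_TERMS))
# One quoted phrase per domain term, joined by OR
```

A single-term query is the degenerate case of this function with one domain and one term, which is exactly the blind spot the claim describes.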

Hypotheses

H1: The claim is substantially correct

Status: Supported
Evidence for: Foundational principle of systematic review methodology; extensive literature on terminology variation across domains.
Evidence against: None.

H2: The claim is substantially incorrect

Status: Eliminated
Evidence for: None.
Evidence against: The entire field of controlled vocabularies (MeSH, EMTREE, etc.) exists precisely because this problem is real and well-documented.

H3: The claim is correct but the "systematic" characterization of blind spots is overstated

Status: Eliminated
Evidence for: None.
Evidence against: The ACM study finding 54 different terms for conversational agents demonstrates that the blind spots are genuinely systematic, not occasional.

Evidence Summary

| Source | Description | Reliability | Relevance | Key Finding |
|--------|-------------|-------------|-----------|-------------|
| SRC01 | UC Davis — Systematic Review Synonyms Guide | High | High | Explicitly addresses synonym identification for search completeness |
| SRC02 | ACM — Conversational Agent Synonyms Study | High | High | Found 54 different terms across disciplines for same phenomenon |
| SRC03 | University of Michigan — SR Search Strategy | High | High | Addresses cross-database terminology variation |
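The recall gap these sources document can be shown on a toy corpus: a single-term search retrieves only the documents written in one discipline's vocabulary, while the expanded query retrieves all of them. The corpus sentences below are invented for illustration and are not drawn from the cited studies.

```python
# Toy corpus: each "paper" describes the same phenomenon in its own
# discipline's vocabulary (sentences are illustrative, not real data).
corpus = [
    "We measure sycophancy in large language models.",
    "Assistants often favor helpfulness over critical thinking.",
    "The system kept caving to user expectations under pressure.",
]

def search(term, docs):
    """Case-insensitive substring match, standing in for a real engine."""
    return [d for d in docs if term.lower() in d.lower()]

single = search("sycophancy", corpus)  # one discipline's term only
expanded = [d for term in ("sycophancy",
                           "helpfulness over critical thinking",
                           "caving to user expectations")
            for d in search(term, corpus)]

print(len(single), len(expanded))  # prints: 1 3
```

Two of three relevant documents are invisible to the single-term search — a systematic, not occasional, loss, since every document outside the searcher's home discipline is missed.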

Collection Synthesis

| Dimension | Assessment |
|-----------|------------|
| Evidence quality | Robust — well-established in information science |
| Source agreement | High — unanimous across SR methodology literature |
| Source independence | Independent — different institutions and contexts |
| Outliers | None |

Gaps

| Missing Evidence | Impact on Assessment |
|------------------|----------------------|
| None significant | This is a well-established principle with strong evidence |

Researcher Bias Check

Declared biases: The researcher includes vocabulary exploration in their methodology, so confirming this claim validates a design choice.
Influence assessment: Low risk — the claim describes a well-established information science principle.

Revisit Triggers

| Trigger | Type | Check |
|---------|------|-------|
| Development of AI-powered universal terminology mapping that eliminates blind spots | event | Monitor NLP/search technology advances |
| Evidence that single-term searches are sufficient for cross-disciplinary research | data | Search for challenges to the vocabulary diversity assumption |