R0050/2026-03-31/Q003/SRC03/E01¶
Content moderation uses the misinformation/disinformation/malinformation categories as vocabulary, but actual moderation remains unstructured.
Extract¶
The search results for content moderation reveal:
- "One of the most common typological approaches is to classify false information based on the intent behind its creation and dissemination, distinguishing between misinformation (unintentional) and disinformation (intentional) as primary categories" — the taxonomy's language is adopted.
- "Human moderation is challenging to scale and is relatively unstructured (in that different moderators may bring in their own biases and assess the same content differently)" — actual moderation does not use formalized classification procedures.
- AI-enabled content moderation uses the categories as training labels but "may miss content nuances discerned by humans" — the taxonomy informs machine learning categories but is not a formal classification step in human-driven workflows.
The taxonomy has become the vocabulary of content moderation without becoming its procedure.
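The vocabulary-versus-procedure gap described above can be made concrete with a minimal sketch. This is purely illustrative: the label set mirrors the taxonomy's categories, but the example training pairs and the `classify` placeholder are assumptions, not documented platform practice.

```python
from enum import Enum


class InfoDisorder(Enum):
    """The taxonomy's three categories, used here as ML-style labels."""
    MISINFORMATION = "misinformation"  # false content, shared without intent to harm
    DISINFORMATION = "disinformation"  # false content, shared with intent to harm
    MALINFORMATION = "malinformation"  # genuine content, shared to cause harm


# Hypothetical (text, label) training pairs, illustrating how the taxonomy
# can supply training labels for AI-enabled moderation.
training_data = [
    ("Shared a debunked cure believing it was true", InfoDisorder.MISINFORMATION),
    ("Fabricated a quote to damage a rival campaign", InfoDisorder.DISINFORMATION),
    ("Leaked private emails to embarrass the author", InfoDisorder.MALINFORMATION),
]


def classify(text):
    """Placeholder: no formalized classification procedure exists, so the
    mapping from content to category is left undefined here."""
    return None


# The taxonomy supplies the label vocabulary even where no procedure exists.
labels = sorted({label.value for _, label in training_data})
print(labels)
```

The point of the sketch is that the labels (the vocabulary) are well defined while `classify` (the procedure) is empty, which is exactly the asymmetry the extracts describe.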
Relevance to Hypotheses¶
| Hypothesis | Relationship | Strength |
|---|---|---|
| H1 | Contradicts | Moderation uses the vocabulary but not as a formal procedure |
| H2 | Supports | No procedural implementation found |
| H3 | Supports | The categories are widely adopted as vocabulary and conceptual framing without being formalized into procedures |
Context¶
The distinction between vocabulary adoption and procedural implementation is the crux of this query. The Wardle/Derakhshan taxonomy has succeeded as a vocabulary contribution — it replaced "fake news" with a more nuanced three-category framework that is now standard usage. But vocabulary is not methodology. Content moderation platforms and fact-checking organizations use the terms without formalizing them into structured classification procedures.