# SRC06 — Glean Enterprise Hallucinations — Scorecard
## Source
| Field | Value |
|---|---|
| Title | Understanding LLM Hallucinations in Enterprise Applications |
| Publisher | Glean |
| Authors | Glean editorial |
| Date | 2025 (accessed 2026-03-29) |
| URL | https://www.glean.com/perspectives/when-llms-hallucinate-in-enterprise-contexts-and-how-contextual-grounding |
| Type | Enterprise software vendor content |
## Summary Ratings
| Dimension | Rating |
|---|---|
| Reliability | Medium |
| Relevance | Medium-High |
| Missing data | Low |
| Measurement bias | Low |
| Selective reporting | Medium |
| Randomization | N/A |
| Protocol deviation | N/A |
| COI/Funding | Medium-High — Glean sells enterprise AI search with RAG grounding |
## Rationale
| Dimension | Rationale |
|---|---|
| Reliability | Vendor content; technically accurate, but framed to promote grounding solutions |
| Relevance | Represents how enterprise vendors explain hallucinations to clients |
| Bias | Glean sells RAG-based grounding as a hallucination solution, which shapes their framing |
## Evidence Extracts
| Evidence | Summary |
|---|---|
| SRC06-E01 | Frames enterprise hallucinations as "pattern-matching" rather than truth-seeking; presents contextual grounding as the primary mitigation |