R0029/2026-03-27/Q005/H1
Statement
AI output misrepresentation is widespread and well-documented across academic, workplace, and software engineering contexts, with multiple large-scale surveys providing quantitative data.
Status
Current: Partially supported
Strong quantitative data exists for academic and workplace contexts. The KPMG study (48,000+ respondents) finds that 57% of workers hide their AI use and present AI-generated work as their own. Academic data shows that 22% of college students admit to using ChatGPT despite believing it constitutes cheating, and nearly 7,000 UK university students were formally caught cheating with AI in 2023-24. However, misrepresentation data specific to software engineering is thin.
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC01-E01 | 57% of workers hide AI use and present AI-generated work as their own (48,000+ sample) |
| SRC02-E01 | 22% of college students admit using ChatGPT despite believing it is cheating |
| SRC03-E01 | Nearly 7,000 UK students formally caught cheating with AI (2023-24) |
Contradicting Evidence
| Evidence | Summary |
|---|---|
| SRC04-E01 | Overall cheating rates unchanged (60-70%) pre- and post-ChatGPT, suggesting AI may substitute for, rather than increase, cheating |
Reasoning
H1 is mostly correct but overstates the software engineering coverage. The academic and workplace data are robust and do show widespread misrepresentation in those contexts. H3 more precisely captures the uneven distribution of evidence across domains.
Relationship to Other Hypotheses
H1 captures the academic and workplace reality but overstates completeness. H3 is more accurate about the software engineering gap.