R0029/2026-03-27/Q005/H3
Statement
Misrepresentation of AI output is documented with quantitative data in academic and workplace contexts, but software-engineering-specific data on misrepresentation (as opposed to general AI code-tool usage) is sparse.
Status
Current: Supported
The evidence clearly shows two well-documented domains and one data gap:
- Workplace: the KPMG/Melbourne survey (48K+ respondents) finds that 57% of workers present AI-generated work as their own. This is the strongest single data point.
- Academic: BestColleges (22% of students admit AI use despite believing it is cheating), UK data (7,000 formal cases), and Stanford (60-70% cheating rates unchanged).
- Software engineering: extensive data on AI code-tool usage exists (84% use AI tools; 42% of code is AI-generated), but no surveys specifically measure misrepresentation or undisclosed submission of AI-generated code.
Supporting Evidence
| Evidence | Summary |
|---|---|
| SRC01-E01 | Workplace: 57% hide AI use (strong data) |
| SRC02-E01 | Academic: 22% student admission rate |
| SRC03-E01 | Academic: 7,000 formal UK cases |
| SRC04-E01 | Academic: cheating rates unchanged pre/post-ChatGPT |
Contradicting Evidence
No evidence contradicts this hypothesis.
Reasoning
The evidence distribution is uneven but clear. Workplace and academic contexts have robust quantitative data. Software engineering has extensive adoption data, but the "misrepresentation" framing does not apply in the same way: in many engineering contexts AI code assistance is expected or even encouraged, so undisclosed use is culturally different from academic plagiarism.
Relationship to Other Hypotheses
H3 refines H1's overly broad claim by accurately characterizing the evidence gap in software engineering. H2 is eliminated.