R0029/2026-03-27/Q005 — Assessment

BLUF

Submitting AI-generated output as one's own work is extensively documented in workplace and academic contexts. In the workplace, 57% of employees hide AI use and present AI work as their own (KPMG, 48K+ respondents). In academia, 22% of college students admit to using ChatGPT despite believing it constitutes cheating, and nearly 7,000 UK students were formally caught in 2023-24 (triple the prior year). Software-engineering-specific misrepresentation data is sparse — AI code tool usage is extensively measured (84% of developers use AI), but the "misrepresentation" framing does not map cleanly onto engineering culture, where AI assistance is often expected.

Probability

Rating: N/A — this is a descriptive question about evidence existence

Confidence in assessment: High

Confidence rationale: The workplace data comes from one of the largest AI behavior surveys ever conducted (48K+). Academic data comes from multiple independent sources (BestColleges, UK institutional data, Stanford longitudinal). The software engineering gap is a genuine absence, confirmed by targeted searching.

Reasoning Chain

  1. KPMG/Melbourne study (48K+ respondents, 47 countries) finds 57% of workers present AI work as their own. This is the single strongest data point. [SRC01-E01, High reliability, High relevance]
  2. BestColleges data shows 22% of college students admit using ChatGPT despite believing it's cheating. 43% have used AI tools for academic work. [SRC02-E01, Medium reliability, High relevance]
  3. UK institutional data shows 7,000 formal AI cheating cases in 2023-24, triple the prior year. [SRC03-E01, Medium-High reliability, High relevance]
  4. Stanford research shows overall cheating rates (60-70%) unchanged pre/post-ChatGPT, suggesting AI substitutes for existing cheating methods. [SRC04-E01, High reliability, Medium relevance]
  5. Software engineering: multiple searches found extensive data on AI code tool adoption (84% of developers, 42% of code AI-generated) but no surveys specifically measuring misrepresentation. This is JUDGMENT: the "misrepresentation" framing may not apply in engineering contexts where AI assistance is normalized.
  6. JUDGMENT: The evidence clearly documents the behavior in workplace and academic contexts. The software engineering gap is real but may reflect different cultural norms rather than absence of the behavior.

Evidence Base Summary

| Source | Description | Reliability | Relevance | Key Finding |
|---|---|---|---|---|
| SRC01 | KPMG/Melbourne workplace survey | High | High | 57% hide AI use |
| SRC02 | BestColleges student survey | Medium | High | 22% admit use despite believing it's cheating |
| SRC03 | UK formal cases | Medium-High | High | 7,000 cases (3x increase) |
| SRC04 | Stanford longitudinal study | High | Medium | Cheating rates unchanged |

Collection Synthesis

| Dimension | Assessment |
|---|---|
| Evidence quality | Robust for workplace and academic; absent for software engineering |
| Source agreement | High — all academic/workplace sources confirm the behavior exists |
| Source independence | High — KPMG, BestColleges, UK institutions, Stanford are independent |
| Outliers | Stanford's stable-rates finding is a useful counterpoint but not a contradiction |

Detail

The workplace and academic evidence is strong and consistent. The KPMG 57% figure is particularly striking because it comes from a well-designed, large-scale survey (48K+ respondents across 47 countries). The Stanford finding that overall cheating rates are unchanged provides important nuance: AI is a new tool for an old behavior, not a new behavior.

Gaps

| Missing Evidence | Impact on Assessment |
|---|---|
| Software engineering misrepresentation surveys | Cannot answer the query fully for all three requested contexts |
| Longitudinal workplace data (pre/post-ChatGPT) | Cannot assess whether workplace misrepresentation increased or always existed |
| Non-Western academic data | UK and US dominate; other countries' experiences unknown |

Researcher Bias Check

Declared biases: No researcher profile provided for this run.

Influence assessment: The query frames AI use as "submitting as one's own work," a framing that embeds a negative judgment. In software engineering, AI code assistance is often expected, so the same behavior may not constitute "misrepresentation" there. This framing bias was surfaced and addressed in the analysis.

Cross-References

| Entity | ID | File |
|---|---|---|
| Hypotheses | H1, H2, H3 | hypotheses/ |
| Sources | SRC01, SRC02, SRC03, SRC04 | sources/ |
| ACH Matrix | | ach-matrix.md |
| Self-Audit | | self-audit.md |