Research R0029 — Plural Voice Attribution
Run 2026-03-27
Query Q001
Source SRC01
Evidence SRC01-E01
Type Statistical

Survey findings on attribution perceptions in human-AI co-creation

URL: https://dl.acm.org/doi/10.1145/3706598.3713522

Extract

Survey of 155 knowledge workers at a large technology company found:

  • AI partners received consistently lower authorship ratings than human partners for equivalent contributions. For equal writing contributions, AI was rated as secondary author (M=-0.67) while humans achieved equal authorship (M=-0.16).
  • Content contributions (new ideas, elaboration) warranted secondary or equal authorship, while form edits (spelling, grammar) received only acknowledgment.
  • Contribution type was rated the most important factor (M=4.39-4.46), followed by contribution amount and initiative.
  • When AI proactively generated complete text, it received more credit (M=2.48) than when it generated the same text on request (M=1.85).
  • Participants used multiple criteria: conceptual development, text production quality, originality, human responsibility/accountability, effort, and the "tool vs. collaborator" distinction.

The researchers conclude: "attribution requires granularity beyond binary disclosure policies" and call for "new attribution approaches that not only acknowledge the involvement of AI in co-created work, but also show how AI contributed."

Relevance to Hypotheses

  • H1 — Supports: Demonstrates that structured research on attribution exists, but the paper itself calls for new frameworks, suggesting existing ones are inadequate.
  • H2 — Contradicts: Directly shows that binary disclosure is insufficient and that structured attribution research is underway.
  • H3 — Supports: The call for "new attribution approaches" confirms the field is pre-standardization.

Context

The study was conducted at IBM, where participants had high AI literacy (63% used generative AI daily or weekly). Findings may therefore overstate familiarity with attribution concepts relative to the general population.

Notes

This paper was presented at CHI 2025 in Yokohama. The same research team built the IBM AI Attribution Toolkit, making the paper both evidence of a framework and evidence of the need for frameworks.