

Research R0029 — Plural Voice Attribution
Mode: Query
Run date: 2026-03-27
Queries: 5
Prompt: Unified Research Standard v1.0-draft
Model: Claude Opus 4.6

A research run investigating AI attribution, public sentiment, journal policies, Kurosawa-Shakespeare adaptations, and misrepresentation of AI output across academic, professional, and engineering contexts.

Queries

Q001 — AI Attribution Frameworks — Emerging but pre-standard

Query: Has anyone proposed a mechanism, standard, or framework for attributing AI contributions to collaborative human-AI work?

Answer: Multiple structured proposals exist (IBM AI Attribution Toolkit, AIA icon system, CHI 2025 research) but no standards body has adopted an AI-specific attribution standard. The field is in active proposal phase, pre-standardization.

Hypothesis | Status | Probability
H1: Multiple formal frameworks exist | Partially supported
H2: Only ad hoc disclosure norms | Eliminated
H3: Emerging but pre-standard | Supported | Likely (55-80%)

Sources: 4 | Searches: 2


Q002 — Public Sentiment on AI Content — Mixed and context-dependent

Query: What does current research say about public and technology community sentiment toward AI-generated content?

Answer: Deeply fragmented rather than uniformly positive or negative. Global trust in AI sits at 46% (KPMG, 48K+ respondents), with a dramatic split: 39% in advanced economies vs. 57% in emerging ones. Trust-use paradox: 66% use AI despite majority distrust. Slow positive trend (52% to 55%, 2022-2024).

Hypothesis | Status | Probability
H1: Predominantly negative | Partially supported
H2: Predominantly positive | Eliminated
H3: Mixed and context-dependent | Supported

Sources: 3 | Searches: 2


Q003 — Journal AI Authorship Policies — Prohibition consensus, disclosure varies

Query: Have academic journals, conferences, or publishers issued formal policies on listing AI as a co-author or contributor?

Answer: Every major venue prohibits AI authorship — universal and absolute. Disclosure requirements form a spectrum from permissive (NeurIPS: methodology-only) through moderate (ACM/IEEE: acknowledgments) to strict (Science: full prompts in methods). All Big 5 publishers prohibit AI authorship.

Hypothesis | Status | Probability
H1: Universal prohibition + consensus | Partially supported
H2: No policies or inconsistent | Eliminated
H3: Prohibition consensus, disclosure varies | Supported | Almost certain (95-99%)

Sources: 6 | Searches: 2


Q004 — Kurosawa-Shakespeare Films — Three films confirmed

Query: Which Akira Kurosawa films were based on or inspired by Shakespeare plays?

Answer: Three films: Throne of Blood (1957, Macbeth), The Bad Sleep Well (1960, Hamlet), and Ran (1985, King Lear). The scholarly consensus is settled. The Bad Sleep Well's Hamlet connection is the most contested of the three, but a majority of critical sources include it.

Hypothesis | Status | Probability
H1: Three films (standard list) | Supported | Almost certain (95-99%)
H2: Two films only | Partially supported
H3: More than three | Eliminated

Sources: 3 | Searches: 1


Q005 — AI Output Misrepresentation — Documented in academic/workplace, sparse for SE

Query: Are there documented cases of people submitting AI-generated output as their own work?

Answer: Extensive data exist for workplace (57% hide AI use; KPMG, 48K+ respondents) and academic contexts (22% student admission rate; 7,000 formal cases in the UK). Software-engineering-specific misrepresentation data are sparse — adoption of AI code tools is measured, but the "misrepresentation" framing does not map cleanly onto engineering culture.

Hypothesis | Status | Probability
H1: Widespread, all contexts | Partially supported
H2: Limited/anecdotal | Eliminated
H3: Academic/workplace documented, SE sparse | Supported

Sources: 4 | Searches: 2



Collection Analysis

Cross-Cutting Patterns

Pattern | Queries Affected | Significance
Principle-level consensus, implementation fragmentation | Q001, Q003 | Both attribution frameworks and journal policies show agreement on principles (AI is not an author; attribution is needed) but fragmentation on implementation details
Trust-use paradox | Q002, Q005 | People use AI despite not trusting it (Q002) and hide their AI use (Q005), suggesting adoption outpaces governance and social norms
KPMG/Melbourne study as cross-cutting source | Q002, Q005 | The same 48K+-respondent study provides key evidence for both public sentiment and workplace misrepresentation
Pre-standardization state of AI norms | Q001, Q003, Q005 | Attribution frameworks are emerging (Q001), disclosure policies vary (Q003), and misrepresentation is widespread (Q005) — a coherent picture of norms lagging adoption

Collection Statistics

Metric | Value
Queries investigated | 5
Answered with high confidence | 4 (Q002, Q003, Q004, Q005)
Answered with medium confidence | 1 (Q001)
H3 (nuanced) supported | 4 of 5 queries
H1 (affirmative) supported | 1 of 5 queries (Q004)
One hypothesis eliminated | 5 of 5 queries (H2 in all except Q004, where H3 was eliminated and H2 only partially supported)
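The tallies above can be re-derived from the per-query hypothesis tables. A minimal sketch in Python — the `outcomes` dict is a transcription of those tables and the `count_status` helper is illustrative, not part of the run's actual tooling:

```python
# Hypothesis outcomes transcribed from the five query tables above.
# Each entry maps a query to the status of its three hypotheses.
outcomes = {
    "Q001": {"H1": "partial", "H2": "eliminated", "H3": "supported"},
    "Q002": {"H1": "partial", "H2": "eliminated", "H3": "supported"},
    "Q003": {"H1": "partial", "H2": "eliminated", "H3": "supported"},
    "Q004": {"H1": "supported", "H2": "partial", "H3": "eliminated"},
    "Q005": {"H1": "partial", "H2": "eliminated", "H3": "supported"},
}

def count_status(hypothesis: str, status: str) -> int:
    """Number of queries in which `hypothesis` received `status`."""
    return sum(1 for q in outcomes.values() if q[hypothesis] == status)

print(count_status("H3", "supported"))   # 4 of 5 (nuanced hypothesis)
print(count_status("H1", "supported"))   # 1 of 5 (Q004 only)
print(count_status("H2", "eliminated"))  # 4 of 5 (in Q004, H3 was eliminated instead)
```

Note that exactly one hypothesis is eliminated per query, but it is H2 in only four of the five cases, which is why the statistics table reports the elimination count per query rather than per hypothesis label.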

Source Independence Assessment

The evidence base draws from genuinely diverse sources across domains: academic research (CHI 2025, Stanford), industry (IBM Research, KPMG/Melbourne), institutional policy (Nature, Science, ACM, IEEE, NeurIPS, Big 5 publishers), survey organizations (Ipsos), and cultural institutions (BFI, Criterion Collection). The main independence concern is the KPMG/Melbourne study serving as a key source for both Q002 and Q005, creating a single-study dependency for workplace data. No other comparable workplace survey was identified.

Collection Gaps

Gap | Impact | Mitigation
Software engineering misrepresentation data | Cannot fully answer Q005 for all three requested contexts | Targeted searches confirmed the gap; cultural framing difference noted
Non-Western perspectives on AI attribution | All attribution frameworks (Q001) come from US/Western institutions | Acknowledged; may limit generalizability
AI-generated content type-specific attitudes | Q002 measures attitudes toward "AI" generally, not "AI-generated content" specifically | Acknowledged; most relevant available data used
ICML-specific AI policy | Q003 could not confirm ICML has an AI-specific policy beyond inherited NeurIPS guidelines | Direct page fetch attempted; limited content found

Collection Self-Audit

Domain | Rating | Notes
Eligibility criteria | Low risk | Consistent criteria applied across all 5 queries; defined before searching
Search comprehensiveness | Low risk | 15+ searches across all queries; ~150 results dispositioned
Evaluation consistency | Low risk | Same GRADE-adapted scoring framework applied to all 20 sources
Synthesis fairness | Low risk | All 15 hypotheses tested fairly; 4 of 5 H3 outcomes reflect genuine evidence nuance, not a default

Resources

Summary

Metric | Value
Queries investigated | 5
Files produced | 140
Sources scored | 20
Evidence extracts | 20
Results dispositioned | 37 selected + 113 rejected = 150 total
Duration (wall clock) | 29m 6s
Tool uses (total) | 140

Tool Breakdown

Tool | Uses | Purpose
WebSearch | 17 | Search queries across all 5 topics
WebFetch | 8 | Page content retrieval for key sources
Write | 68 | File creation
Read | 4 | Methodology and output format reading
Edit | 0 | No edits needed
Bash | 12 | Directory creation, file generation, validation

Token Distribution

Category | Tokens
Input (context) | ~350,000
Output (generation) | ~80,000
Total | ~430,000