
Research R0027 — Multilingual prompt engineering challenges
Run 2026-03-26
Query Q001
Search S01
Result S01-R01
Source SRC01

Vatsal et al. — Comprehensive survey of multilingual prompt engineering

Source

Title: Multilingual Prompt Engineering in Large Language Models: A Survey Across NLP Tasks
Publisher: arXiv
Author(s): Shubham Vatsal, Harsh Dubey, Aditi Singh
Date: 2025-05-16
URL: https://arxiv.org/abs/2505.11665
Type: Survey / review paper

Summary

Reliability: Medium-High
Relevance: High
Bias (missing data): Low risk
Bias (measurement): N/A
Bias (selective reporting): Some concerns
Bias (randomization): N/A (not an RCT)
Bias (protocol deviation): N/A (not an RCT)
Bias (COI/funding): Low risk

Rationale

Reliability: Comprehensive survey covering 36 papers, 39 techniques, 30 tasks, and ~250 languages. A preprint not yet peer-reviewed, but its methodology is transparent.
Relevance: Directly addresses the core question of how prompt engineering varies across languages; the most comprehensive single source found.
Bias flags: Some concerns about selective reporting, since the survey's scope decisions could bias which techniques receive coverage. It reports no performance-comparison data of its own, deferring to the individual papers surveyed.

Evidence Extracts

SRC01-E01: Survey scope of 36 papers, 39 techniques, and 250 languages, confirming substantial research activity in the area.
SRC01-E02: Finding that native-language prompts outperform English prompts on specific tasks.