

Research R0027 — Multilingual prompt engineering challenges
Run 2026-03-26
Query Q001
Search S01
Result S01-R02
Source SRC02

Mondshine et al. — Prompt translation strategies across 35 languages

Source

Title: Beyond English: The Impact of Prompt Translation Strategies across Languages and Tasks in Multilingual LLMs
Publisher: arXiv / LoResMT 2025
Author(s): Itai Mondshine, Tzuf Paz-Argaman, Reut Tsarfaty
Date: 2025-02
URL: https://arxiv.org/html/2502.09331v1
Type: Research paper

Summary

Reliability: High
Relevance: High
Bias (missing data): Low risk
Bias (measurement): Low risk
Bias (selective reporting): Low risk
Bias (randomization): N/A — not an RCT
Bias (protocol deviation): N/A — not an RCT
Bias (COI/funding): Low risk

Rationale

Reliability: Rigorous methodology (35 languages, 4 tasks, 24 prompt configurations per language/task, multiple models); authors affiliated with Bar-Ilan University.
Relevance: Directly answers whether prompt language affects performance and quantifies the effect across high-, medium-, and low-resource languages.
Bias flags: Low risk across all applicable domains; the methodology is transparent, including an Association Rule Learning analysis of configuration effects.

Evidence Extracts

SRC02-E01: Selective pre-translation consistently outperforms full translation and native inference.
SRC02-E02: Low-resource languages show 200%+ improvement with selective pre-translation.
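The selective pre-translation strategy in SRC02-E01 can be sketched as follows: the prompt is split into components, and only a chosen subset is machine-translated to English while the rest stays in the source language. This is a minimal illustration only — the component names, the `translate_to_english` stub, and the 2^4 enumeration are assumptions for this sketch, not the paper's code (the paper reports 24 configurations per language/task, so its configuration space is richer).

```python
from itertools import product

# Prompt components (illustrative names; the paper's exact component
# set is an assumption here).
COMPONENTS = ["instruction", "context", "examples", "output"]

def translate_to_english(text: str) -> str:
    """Placeholder for a real MT system; here we just tag the text."""
    return f"<en>{text}</en>"

def build_prompt(parts: dict, translate_flags: dict) -> str:
    """Selective pre-translation: translate only the flagged components
    to English, leaving the others in the source language."""
    rendered = []
    for name in COMPONENTS:
        text = parts[name]
        if translate_flags[name]:
            text = translate_to_english(text)
        rendered.append(f"{name}: {text}")
    return "\n".join(rendered)

def all_configurations():
    """Enumerate every translate/keep combination of the components
    (2^4 = 16 in this simplified sketch)."""
    for flags in product([False, True], repeat=len(COMPONENTS)):
        yield dict(zip(COMPONENTS, flags))
```

Running every configuration of `build_prompt` against a fixed task and scoring the outputs is, in spirit, how one would compare full translation (all flags on), native inference (all flags off), and the selective mixtures in between.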