R0027/2026-03-26/Q001/S01

WebSearch — Academic research on multilingual prompt engineering effectiveness and cross-language LLM performance

Summary

| Field | Value |
| --- | --- |
| Source/Database | WebSearch |
| Query terms | "multilingual prompt engineering effectiveness research LLM non-English languages comparison 2024 2025" |
| Filters | None |
| Results returned | 10 |
| Results selected | 5 |
| Results rejected | 5 |

Selected Results

| Result | Title | URL | Rationale |
| --- | --- | --- | --- |
| S01-R01 | Multilingual Prompt Engineering in Large Language Models: A Survey Across NLP Tasks | https://arxiv.org/abs/2505.11665 | Comprehensive survey of 36 papers covering 39 prompting techniques across 250 languages; directly addresses the query |
| S01-R02 | Beyond English: The Impact of Prompt Translation Strategies across Languages and Tasks | https://arxiv.org/html/2502.09331v1 | Systematic evaluation of prompt translation strategies across 35 languages with quantified performance data |
| S01-R03 | Not All Languages Are Created Equal in LLMs: Cross-Lingual-Thought Prompting | https://openreview.net/forum?id=E4ebDehO3O | Introduces the XLT prompting method with improvements of 10+ points; foundational work on cross-lingual prompting |
| S01-R04 | Prompt engineering for low-resource languages | https://portkey.ai/blog/prompt-engineering-for-low-resource-languages/ | Practical techniques for low-resource language prompting with quantified results |
| S01-R05 | Evaluation of open and closed-source LLMs for low-resource language with zero-shot, few-shot, and chain-of-thought prompting | https://www.sciencedirect.com/science/article/pii/S2949719124000724 | Peer-reviewed evaluation of prompting strategies for low-resource languages |

Rejected Results

| Result | Title | URL | Rationale |
| --- | --- | --- | --- |
| S01-R06 | Papers - Prompt Engineering Guide | https://www.promptingguide.ai/papers | Aggregator page, not primary research |
| S01-R07 | Prompt Engineering and the Effectiveness of Large Language Models in Enhancing Human Productivity | https://arxiv.org/html/2507.18638v2 | Focuses on productivity, not multilingual effectiveness |
| S01-R08 | Promptomatix: An Automatic Prompt Optimization | https://arxiv.org/pdf/2507.14241 | Prompt optimization tool, not a multilingual comparison |
| S01-R09 | arxiv.org duplicate of S01-R01 | https://arxiv.org/html/2505.11665v1 | Duplicate of R01 in a different format |
| S01-R10 | Not All Languages Are Created Equal (duplicate listing) | https://openreview.net/forum?id=E4ebDehO3O&noteId=OgctsH4sJl | Duplicate of R03 with a different anchor |

Notes

Strong results for this search. The survey paper (R01) provides comprehensive coverage of the field, while the prompt translation study (R02) offers the most directly relevant quantified comparison data.