
Research R0027 — Multilingual prompt engineering challenges
Run 2026-03-26
Query Q001
Search S01
Result S01-R03
Source SRC03

Huang et al. — Cross-Lingual-Thought Prompting (XLT)

Source

Title: Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting
Publisher: ACL Anthology / EMNLP 2023 Findings
Author(s): Haoyang Huang, Tianyi Tang, Dongdong Zhang, Wayne Xin Zhao, Ting Song, Yan Xia, Furu Wei
Date: 2023-10
URL: https://aclanthology.org/2023.findings-emnlp.826/
Type: Research paper (peer-reviewed)

Summary

Reliability: High
Relevance: High
Bias (missing data): Low risk
Bias (measurement): Low risk
Bias (selective reporting): Low risk
Bias (randomization): N/A (not an RCT)
Bias (protocol deviation): N/A (not an RCT)
Bias (COI/funding): Some concerns

Rationale

Reliability: Peer-reviewed at a top NLP venue (EMNLP 2023 Findings). Evaluated on 7 benchmarks across multiple languages. Highly cited in subsequent work.
Relevance: Directly demonstrates that languages are treated unequally in LLMs and proposes a specific prompting remedy with quantified improvements.
Bias flags: Some concerns about COI: the authors are affiliated with Microsoft Research, which may favor approaches compatible with Microsoft's own models.

Evidence Extracts

SRC03-E01: XLT achieves a 10+ point average improvement, confirming language inequality in LLMs
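
For context, the remedy the paper proposes is a prompt template that asks the model to restate a non-English request in English and reason in English before answering. The sketch below is illustrative only: the function name and wording are this note's assumptions, not the paper's verbatim XLT template.

```python
def xlt_style_prompt(request: str, task: str = "answer the question") -> str:
    """Build a cross-lingual-thought style prompt.

    Illustrative sketch, not the verbatim XLT template from Huang et al.
    The idea: wrap a (possibly non-English) request with instructions to
    (1) act as a multilingual expert, (2) restate the request in English,
    (3) reason step by step in English, (4) emit the answer in a fixed format.
    """
    return (
        "I want you to act as an expert in multilingual understanding.\n"
        f"Request: {request}\n"
        "You should:\n"
        "1) Restate the request in English.\n"
        f"2) {task} step by step in English.\n"
        "3) Give the final result in the format: Answer: <answer>."
    )


# Hypothetical usage with a French arithmetic question.
prompt = xlt_style_prompt(
    "Combien font 17 fois 23 ?",
    task="Solve the arithmetic problem",
)
print(prompt)
```

The prompt string would then be sent to the LLM as-is; the paper's quantified gains come from comparing this style of wrapped prompt against asking the question directly in the source language.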