R0027/2026-03-26/Q001/SRC06
Kmainasi et al. — Native vs non-native prompting in Arabic
Source
| Field | Value |
| --- | --- |
| Title | Native vs Non-Native Language Prompting: A Comparative Analysis |
| Publisher | arXiv |
| Author(s) | Mohamed Bayan Kmainasi et al. |
| Date | 2024-09 |
| URL | https://arxiv.org/html/2409.07054v1 |
| Type | Research paper |
Summary
| Dimension | Rating |
| --- | --- |
| Reliability | Medium-High |
| Relevance | High |
| Bias: Missing data | Low risk |
| Bias: Measurement | Low risk |
| Bias: Selective reporting | Low risk |
| Bias: Randomization | N/A (not an RCT) |
| Bias: Protocol deviation | N/A (not an RCT) |
| Bias: COI/Funding | Low risk |
Rationale
| Dimension | Rationale |
| --- | --- |
| Reliability | 197 experiments across 3 LLMs and 12 Arabic datasets. Multi-institution team (Qatar University, CMU Qatar, LJMU). |
| Relevance | Directly compares native vs English prompts for Arabic, one of the languages named in Q001. |
| Bias flags | Single-language study (Arabic only). Findings may not generalize to other language families. |
Evidence
| Evidence ID | Summary |
| --- | --- |
| SRC06-E01 | English prompts outperform native Arabic prompts even on Arabic-centric models |
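To make the native-vs-non-native distinction the paper studies concrete, here is a minimal sketch of the two prompt variants for one Arabic input. The task wording, labels, and function name are illustrative assumptions, not templates taken from the paper.

```python
def build_prompts(text: str) -> dict:
    """Build the two prompt variants being compared: a native (Arabic)
    instruction and a non-native (English) instruction for the same input.
    Wording is a hypothetical example, not the paper's actual template."""
    native_ar = f"صنّف النص التالي إلى إيجابي أو سلبي:\n{text}"
    english = f"Classify the following Arabic text as positive or negative:\n{text}"
    return {"native": native_ar, "english": english}

# Same Arabic input, two instruction languages
prompts = build_prompts("الخدمة ممتازة")
```

Under this framing, SRC06-E01 says the `english` variant tends to score higher than the `native` variant, even when the underlying model is Arabic-centric.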