R0044/2026-03-29/Q002/S03

WebSearch — Military AI overreliance and targeting harm

Summary

Field            | Value
-----------------|------
Source/Database  | WebSearch
Query terms      | AI agreeable output military harm case study near-miss "target identification" OR "intelligence analysis" OR "decision support" overreliance trusted wrong
Filters          | None
Results returned | 10
Results selected | 3
Results rejected | 7
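For reproducibility, the query string above can be composed programmatically. A minimal sketch, assuming the common web-search convention that quoted strings are phrase matches and OR binds adjacent terms (the variable names here are illustrative, not part of any search API):

```python
# Compose the boolean web-search query recorded in this log entry.
base_terms = [
    "AI", "agreeable", "output", "military", "harm",
    "case", "study", "near-miss",
]
# Quoted phrases joined with OR: any one of these must match.
or_group = ['"target identification"', '"intelligence analysis"', '"decision support"']
tail_terms = ["overreliance", "trusted", "wrong"]

query = " ".join(base_terms + [" OR ".join(or_group)] + tail_terms)
print(query)
```

This makes the query structure explicit: a bag of unquoted keywords plus one OR-group of exact phrases, which is how the "Query terms" field above parses.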

Selected Results

Result  | Title                                                  | URL                                                                                                                    | Rationale
--------|--------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|----------
S03-R01 | ICRC: Risks of AI in military targeting support        | https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/ | Authoritative IHL analysis
S03-R02 | Research on Military AI Risk Governance (Marvin Project) | https://dl.acm.org/doi/full/10.1145/3748825.3748926                                                                    | Quantified operator trust rates
S03-R03 | Military AI Needs Technically-Informed Regulation      | https://arxiv.org/html/2505.18371v1                                                                                    | Over-trust and degraded vigilance analysis

Rejected Results

Result  | Title                               | URL           | Rationale
--------|-------------------------------------|---------------|----------
S03-R04 | Various military AI ethics articles | Multiple URLs | Policy analysis rather than empirical evidence or case studies

Notes

Evidence of military AI over-reliance is especially hard to obtain in the form of specific, documented incidents. The Marvin Project operator-trust data (S03-R02) is the most concrete result here, but it is cited at second hand rather than from a primary source. Classified military incidents involving AI over-reliance would not surface in open-source searches, so the scarcity of incident reports should not be read as evidence of absence.