Military AI has become a reality, and experts are sounding the alarm. The integration of Artificial Intelligence into the military domain raises significant concerns around transparency, accountability, and bias, all of which are amplified in high-risk military contexts.
One of the primary worries is that AI systems can be used to select targets autonomously, leading to unintended civilian casualties. For instance, Israel's use of the AI-based "Lavender" system in Gaza has been linked to thousands of civilian deaths, despite a reported error rate of roughly 10%.
Experts also stress the need for human control and involvement in decisions concerning the employment of nuclear weapons. The REAIM 2024 summit, attended by nearly 100 countries, concluded with a non-binding declaration affirming that humans must retain control over nuclear weapons decisions and that military AI applications must comply with national and international law.
Moreover, the development and use of AI in military operations raise broader ethical concerns about disinformation and accountability. As Lt. General P.C. Katoch (Retd) notes, "It is 'might is right' and 'everything is fair in war,' while the disinformation using AI enables flat denials as well as labeling false charges."
To address these concerns, experts recommend establishing clear guidelines and regulations for the development and use of AI in military operations, including requirements for transparency, accountability, and meaningful human oversight of AI-assisted decision-making.