The Use of Commercial AI Services in War Spotlights the Urgency of Implementing International Regulations on Military AI Applications

Paulo Frank

Artificial Intelligence (AI) has rapidly transformed methods of operation and vastly augmented capabilities across numerous domains over the last few years.[1] An area of particularly salient concern is the encroachment of AI applications into military operations. In both the ongoing Russia-Ukraine and Israel-Palestine conflicts, drones equipped with AI technology have seen widespread use.[2] The AI applications used for military purposes were previously understood as being privately developed specifically for warfare, but it has recently come to light that Israel has been employing commercially available AI models (OpenAI and Microsoft Azure) in its current conflict.[3] While OpenAI initially prohibited military applications of its services through its terms of use, it lifted this self-imposed ban in 2024[4] and has since contracted with defense firm Anduril to support the US military.[5] These developments represent a turning point for the foreseeable applications of AI in warfare, opening the field to new players in the military AI space and highlighting the importance of reaching an international consensus on regulations for the use of AI in combat.

There is currently no “comprehensive global governance framework for military [AI],” leaving a troubling regulatory gap that threatens fundamental principles of international humanitarian law (IHL).[6] AI deployment in combat scenarios poses significant threats to the principle of proportionality, as well as to compliance with the nonbinding, yet guiding, principle of meaningful human control (MHC).[7] The proportionality requirement prohibits attacks that cause collateral damage “‘excessive in relation to the concrete and direct military advantage anticipated.’”[8] Opponents of AI systems in combat point to the lack of human judgment in these applications as a fatal flaw in the ability to assess proportionality.[9] Indeed, making a proportionality determination is a highly subjective endeavor, requiring cognitive functions like judgment, empathy, and adaptability which current AI systems do not possess.[10] MHC is likewise threatened by the increasing use of AI in warfare: although AI systems have not yet been deployed without significant human control,[11] increasing integration of AI into critical functions of autonomous weapons systems “makes human control over specific uses of force decisions increasingly meaningless because of the speed, complexity, and opacity with which such systems operate.”[12] These characteristics of AI systems directly implicate additional IHL principles, including predictability, reliability, transparency, and “accurate information on the outcome sought and on the context of use.”[13] This becomes abundantly clear when examining the inherent limitations of AI systems, which often rely on machine-learning programs that are inherently unpredictable due to their ability to self-adjust future conduct based on prior experiences without explicit programming to take specified future actions.[14] In short, without human control over final decisions, current limitations in AI technology will render actions taken by autonomous systems incompatible with numerous IHL principles.

Recognizing the importance of regulating the spreading influence of AI in military operations, the UN Secretary-General aims to achieve a legally binding treaty by 2026 that bans AI weapons systems “that operate without human control or oversight and cannot comply with international humanitarian law.”[15] The introduction of commercial AI platforms into the military AI space accentuates the gravity of quickly reaching an international consensus on rules for AI in war, as the integration of new AI systems into military operations and increasing reliance on AI throughout various weapons functionalities threaten a reduced role for human operators and implicate important principles of IHL.

[1] See Yongjun Xu et al., Artificial Intelligence is Restructuring a New World, 5 Innovation 1, 1–2 (2024) (editorial) (explaining how AI tools have touched various aspects of daily life); see also Anthony E. Davis, The Future of Law Firms (and Lawyers) in the Age of Artificial Intelligence, 27 Pro. Law. 3, 4 (2020) (discussing an AI application employed by JP Morgan which enables document review in mere seconds that would have otherwise taken 360,000 human-hours).

[2] Kristina Humble, War, Artificial Intelligence, and the Future of Conflict, Geo. J. Int’l Affs. (July 12, 2024), https://gjia.georgetown.edu/2024/07/12/war-artificial-intelligence-and-the-future-of-conflict/ (noting that Ukraine has used AI-equipped drones to “autonomously identify terrain and military targets” and “launch successful attacks against Russian refineries,” and that Israel has similarly employed AI-equipped drones to identify nearly 40,000 Hamas targets).

[3] Michael Biesecker et al., As Israel Uses US-Made AI Models in War, Concerns Arise About Tech’s Role in Who Lives and Who Dies, AP News, https://apnews.com/article/israel-palestinians-ai-technology-737bc17af7b03e98c29cec4e15d0f108 (last updated Feb. 18, 2025, 6:06 AM CST) (calling Israel’s use of commercial AI models the “first confirmation” of such models being directly used for warfare).

[4] Id.

[5] Will Knight, OpenAI is Working with Anduril to Supply the US Military with AI, Wired (Dec. 4, 2024, 4:01 PM), https://www.wired.com/story/openai-anduril-defense/.

[6] Raluca Csernatoni, Governing Military AI Amid a Geopolitical Minefield, Carnegie Endowment for Int’l Peace (July 17, 2024), https://carnegieendowment.org/research/2024/07/governing-military-ai-amid-a-geopolitical-minefield?lang=en; see also Shin-Shin Hua, Machine Learning Weapons and International Humanitarian Law: Rethinking Meaningful Human Control, 51 Geo. J. Int’l L. 117, 126 (2020) (highlighting the challenge posed to IHL as a question of how to “harness the potential” of AI systems to minimize civilian harm while prohibiting AI systems that are “dangerously unpredictable or inscrutable”).

[7] Hua, supra note 6, at 129–32 (explaining the principles of proportionality and MHC in relation to AI deployment in warfare).

[8] Elliot Winter, The Compatibility of Autonomous Weapons with the Principles of International Humanitarian Law, J. Conflict & Sec. L., Jan. 2022, at 15.

[9] Id. at 3.

[10] Davis, supra note 1, at 6.

[11] Humble, supra note 2.

[12] Csernatoni, supra note 6.

[13] Hua, supra note 6, at 131 (highlighting the elements broadly covered by the MHC doctrine).

[14] Nadia Banteka, Artificially Intelligent Persons, 58 Hous. L. Rev. 537, 547 (2021) (explaining the machine-learning model and the fact that AI systems may reach unpredictable decisions that even their own developers cannot foresee).

[15] Csernatoni, supra note 6.