The Future of Autonomous Weapons: The Need for a Strict Liability Stopgap

By Matt Nelson

A recent recommendation by the U.S. National Security Commission on Artificial Intelligence, led by former Google CEO Eric Schmidt, urges the Biden Administration to reject calls for a ban on autonomous weapon systems.[1] The panel primarily asserts that the U.S. can use these robotic weapons in a safe and lawful manner.[2] The report stands in stark contrast to the campaign led by tech leaders, human rights activists, and legal scholars seeking an internationally recognized ban on these weapons.[3] Without U.S. leadership in this arena, a ban appears doomed. The U.S. remains the world’s preeminent military power, and its general acceptance of these controversial weapon systems will inevitably permeate military policy around the globe. If the U.S. embraces this recommendation, existing legal doctrines must be examined for their ability to serve as a liability stopgap until the international community adopts regulation.

Because these weapon systems remove the human from the decision-making loop, a vicarious liability regime presents the most promise.[4] Some legal scholars have raised the possible applicability of the Command Responsibility Doctrine.[5] This customary international doctrine holds “commanders and other superiors criminally responsible for war crimes committed by their subordinates if they knew, or had reason to know, that the subordinates were about to commit or were committing such crimes and did not take all necessary and reasonable measures in their power to prevent their commission, or if such crimes had been committed, to punish the persons responsible.”[6] The doctrine’s applicability to autonomous weapons, however, falls short on the critical mens rea element. Due to the opacity of the decision-making process employed by these AI-powered soldiers, satisfying the commander’s “knew” or “should have known” standard becomes practically impossible.

Although inapplicable in its current form, Command Responsibility could provide a temporary solution if courts make a conscious decision to embrace a broader reading of the “should have known” standard.[7] Such an interpretation would effectively create a strict liability regime and thus a responsibility link to the commander deploying these weapons. Although likely perceived as draconian, this approach answers a moral imperative to protect humanity from the whims of AI decision-making. In the unique context of autonomous weapons, the implementation of strict liability through the Command Responsibility Doctrine is morally justified and legally possible, and it should be seriously considered by the international legal community.

[1] Thomas Macaulay, National security commission led by ex-Google CEO urges US to ignore calls to ban autonomous weapons, The Next Web (Mar. 2, 2021), https://thenextweb.com/neural/2021/03/02/national-security-commission-led-by-ex-google-ceo-urges-us-to-ignore-calls-to-ban-autonomous-weapons/.

[2] Id.

[3] See generally Autonomous Weapons: An Open Letter from AI & Robotics Researchers (July 29, 2015) (released at the International Joint Conference on Artificial Intelligence), https://futureoflife.org/open-letter-autonomous-weapons/?cn-reloaded=1; see also Bonnie Docherty, Losing Humanity: The Case Against Killer Robots, Human Rights Watch (Nov. 2012), https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots#_ftnref3.

[4] DoD Directive 3000.09, “Autonomy in Weapon Systems,” May 8, 2017.

[5] Diederek-Jan Posthuma, Autonomous Weapons Systems and Command Responsibility 17 (June 2018) (unpublished master’s thesis, Tilburg University) (on file with Tilburg University).

[6] See Rule 153. Command Responsibility for Failure to Prevent, Repress or Report War Crimes, Customary IHL, ICRC, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule153; see also Statute of the International Criminal Tribunal for the Former Yugoslavia art. 7(3), S.C. Res. 827, U.N. Doc. S/RES/827 (May 25, 1993) (imposing liability on a ‘superior’ if “he knew or had reason to know that the subordinate was about to commit such acts or had done so and the superior failed to take the necessary and reasonable measures to prevent such acts or to punish the perpetrators thereof.”).

[7] Matthew Lippman, Conundrum of Armed Conflict: Criminal Defences to Violations of the Humanitarian Law of War, 15 Dickinson Journal of International Law 1, 75, 89 (1996) (examining the strict liability interpretation of the “should have known” standard in In re Yamashita).