AI and Dolus Specialis: Adapting Genocide Law to the Rise of AI-Powered Weaponry in International Conflicts
During the ongoing Israel-Hamas war in Gaza, the Israel Defense Forces (“IDF”) have used an artificial intelligence (“AI”) system, named “The Gospel,” to automatically generate bombing targets at an unprecedented rate.[i] One source explained to an Israeli investigative magazine, “We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”[ii] This development has occurred at a time when the IDF has loosened its standards for the number of acceptable civilian casualties.[iii] Another Israeli AI system, called “Lavender,” marks people for “kill lists.”[iv] Despite a reported ten percent error rate, military personnel merely serve as a “rubber stamp” for the AI’s decision, with no requirement to scrutinize the data Lavender uses to form its conclusion.[v]
Described as one of the heaviest bombing campaigns of the modern era, the IDF’s attacks on Gaza in the first two months of the war caused more destruction to the tiny enclave than the multi-year bombings of Syria’s Aleppo and Ukraine’s Mariupol and, proportionally speaking, the bombing of Germany during World War II.[vi] Over 355,000 homes, more than 60 percent of Gaza’s housing stock, were damaged or destroyed.[vii] More than two years into the war, the official death toll in Gaza exceeds 75,000,[viii] and some estimates place the true count significantly higher.[ix] This staggering level of death and destruction led South Africa to accuse Israel of committing genocide against the Palestinian people.[x] Notably, in its application to the International Court of Justice, South Africa cited the aforementioned reports of the Israeli military’s use of AI for targeting.[xi] This raises a pressing question: how can genocidal intent be demonstrated from the use of AI targeting algorithms?
A finding of specific intent, referred to as dolus specialis, is the critical element required for a genocide conviction.[xii] This subjective element, seen as the “essence” of the crime, requires that genocidal acts be committed with the specific intent to destroy the targeted group in whole or in part.[xiii] There are two general approaches to establishing dolus specialis: (1) the purpose-based approach, which requires a showing that a “genocidal scheme” was willed by the defendant through “active choice,”[xiv] and (2) the prevention-focused, knowledge-based approach, which finds genocidal intent where the actor knew their actions “were contributing to a wider genocidal plan” that could lead to the targeted group’s destruction.[xv] Although the knowledge-based approach allows a wider range of conduct and actors to be held accountable for genocide,[xvi] the purpose-based approach has been the dominant method for determining dolus specialis.[xvii]
The emergence of AI-powered weaponry in war calls for a reexamination of the sufficiency of the purpose-based approach.[xviii] First, an algorithm does not have intent, motivation, or even thought in the human sense.[xix] While we may naturally speak of AI tools as “choosing” an outcome through their own agency, this is a misnomer that attributes human qualities to inanimate processes.[xx] Moreover, it can be impossible to understand how an AI system analyzes data and reaches its decisions, a phenomenon known as the “black box problem.”[xxi] The implication is that, legally speaking, little can be inferred about the intent of the humans who created or used the AI.[xxii] Because the chain of causation between human and outcome is obscured by the machine, it may make more sense to shift to the knowledge-based approach and focus on prevention.[xxiii]
Shifting genocide jurisprudence to the knowledge-based framework would be an important step in adapting to the introduction of AI weaponry. Adopting the approach would not necessarily dilute the gravity of the crime of genocide: proving that an actor deployed highly destructive AI weapons systems while substantially certain that conditions of life destructive to the targeted group would result is still a demanding mental state to establish.[xxiv] The approach is also appropriate because it places state actors center stage and focuses on their awareness of their AI algorithms’ genocidal consequences, alerting others to the possibility of states hiding genocidal design behind military objectives and “neutral” technological processes.[xxv]
AI weapons have been described as revolutionary since as early as 2016, likened to the development of gunpowder and nuclear bombs.[xxvi] In the years since, many researchers have warned of “an international AI arms race,” warranting immediate attention to the legal implications of these weapons’ use.[xxvii] More recent developments, such as the Trump administration’s February 2026 clash with Anthropic over ethical limitations on the military and surveillance uses of its products,[xxviii] only heighten the urgency. To properly handle the challenges this technological revolution presents, international law must adapt accordingly, and quickly.
Cameron Mukerjee is a staff member of Fordham International Law Journal Volume XLIX.
[i] See Yuval Abraham, ‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza, +972 Magazine (Nov. 30, 2023), https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/.
[ii] Id.
[iii] See id. In one case, the IDF approved the assassination of a Hamas commander while knowing it would kill hundreds of Palestinian civilians. Id.
[iv] Yuval Abraham, ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza, +972 Magazine (Apr. 3, 2024), https://www.972mag.com/lavender-ai-israeli-army-gaza/.
[v] Id.
[vi] Application of the Convention on the Prevention and Punishment of the Crime of Genocide in the Gaza Strip (South Africa v. Israel), Application Instituting Proceedings, 2023 I.C.J., at 24-25 (Dec. 29, 2023), https://www.icj-cij.org/sites/default/files/case-related/192/192-20231228-app-01-00-en.pdf.
[vii] Id. at 28.
[viii] Mohammad Mansour, Gaza death toll exceeds 75,000 as independent data verify loss, Al Jazeera (Feb. 18, 2026), https://www.aljazeera.com/features/2026/2/18/gaza-death-toll-exceeds-75000-as-independent-data-verify-loss.
[ix] See Rasha Khatib et al., Counting the Dead in Gaza: Difficult but Essential, 404 Lancet 237 (2024) (arguing that it is not implausible that hundreds of thousands of deaths could be attributed to the war).
[x] See Application Instituting Proceedings (South Africa v. Israel), 2023 I.C.J., at 6.
[xi] See id. at 78.
[xii] Karl Friberg, Intent in the Era of Mechanical Forces 13 (2025) (Master’s thesis, Lund University), https://lup.lub.lu.se/student-papers/record/9195217/file/9195228.pdf.
[xiii] Id.
[xiv] Id. at 15.
[xv] Id. at 16.
[xvi] Id.
[xvii] Id. at 23.
[xviii] See id. at 24.
[xix] Robin Feldman & Kara Stein, AI Governance in the Financial Industry, 27 Stan. J.L. Bus. & Fin. 94, 105 (2022).
[xx] Id.
[xxi] Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31 Harv. J.L. & Tech. 889, 891 (2018).
[xxii] Id.
[xxiii] See Friberg, supra note xii, at 25.
[xxiv] See id. at 32.
[xxv] See id.
[xxvi] See Rebecca Crootof, War Torts: Accountability for Autonomous Weapons, 164 U. Pa. L. Rev. 1347, 1366 (2016).
[xxvii] See Elizabeth Fuzaylova, War Torts: Why a Limited Strict Liability Regime Should Be Implemented, 40 Cardozo L. Rev. 1327, 1338 (2019).
[xxviii] See Jo Ling Kent & Joe Walsh, Anthropic CEO says he’s sticking to AI “red lines” despite clash with Pentagon, CBS News (Feb. 28, 2026), https://www.cbsnews.com/news/pentagon-anthropic-dario-amodei-cbs-news-interview-exclusive/.
This is a student blog post and in no way represents the views of the Fordham International Law Journal.