"

9 The Genie is Out: is AI a Tool of War or Peace?

Izabela Pereira Watts[1]

 

Abstract

Artificial Intelligence (AI) has recently become central to debates over warfare, humanitarian law, human rights, and democracy. As if a genie had escaped its lamp, national and international debates have spiralled, demanding regulation of a technology whose benefits and drawbacks remain largely unknown. Drawing on the Israel-Gaza war, this article briefly analyses the recent impacts of AI on humanitarian law and war-making. Is AI a tool of war or peace? The war in Gaza has been a “testing laboratory” for AI, with disastrous humanitarian consequences. Under the narrative of “efficient war-making”, the use of AI software such as “Lavender” has helped perpetrate genocide and misinformation campaigns. The article concludes that AI has great humanitarian potential to save lives. Nevertheless, under the triad of peace-war-democracy, international regulations come late but are crucial before the genie is commanded by the wrong master.

Keywords

Artificial Intelligence (AI), Peace, War, Genocide, International security, Israel, Gaza


AI, AGI and LAWS: concepts and misconceptions

In this article, the discussion of Artificial Intelligence (AI) focuses only on its use as a weapon with Autonomous Technology (AT) in armed conflict (Surber, 2018). In pursuit of optimum military effectiveness, some Lethal Autonomous Weapons Systems (LAWS) are combined with AI (UN, 2023). The problem is that this raises fundamental security, legal and ethical issues, particularly with respect to the principles of International Humanitarian Law (IHL). The militarisation of AI is about more than developing robots, anti-drone systems, and cruise missiles (Garcia, 2019). The development and deployment of new technology have the potential to reshape the nature of warfare by turning life-and-death decisions over to autonomous drones equipped with artificial intelligence programs. A pivotal aspect is that AI software is inherently prone to technological bias, discrimination, and disinformation (Gruszczak & Kaempf, 2023; Gray, 2025). These systems have some capacity for “thinking” and a high capacity for data collection. If not controlled, the fear is that they will soon transform into Artificial General Intelligence (AGI). Although this is still science fiction, it would mean autonomy stricto sensu: the ability to think, reason, and act without any human input. The hype itself causes harm by diverting attention from what is happening on the ground (Aljazeera, 2024c).

AI as a killing tool for Genocide: Lavender, The Gospel, and Where’s Daddy?

The conflict in Gaza has been the testing ground for Israeli Artificial Intelligence (Aljazeera, 2023b). Although elements of the same system, different AI software has been used since Operation Guardian of the Walls in 2021. First, “Lavender” generates a “killing list” of thousands of targets at an unprecedented rate (Democracy Now, 2024). The software is designed to mark all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ), including low-ranking ones, as potential bombing targets. The algorithm identifies its targets based on subjective indications, such as social media activity or whether they have recently changed mobile phones. The algorithm gives every single one of the almost 2.3 million residents of the Gaza Strip a rating from 1 to 100 (Aljazeera, 2023b). The execution of the “killing list” is done without any human check on its accuracy, on why the machine made those choices, or on the raw intelligence data on which they were based. In some cases, a human official will dedicate mere seconds to checking only whether the target is male, under the erroneous assumption that no female is directly or indirectly associated with the “enemy”. Moreover, it also dehumanises Palestinian men, reducing each to a statistical figure ready to be killed (Aljazeera, 2024c). Israel considers Lavender to be 90% accurate; it therefore accepts that the remaining 10%, called “garbage targets”, will die for no reason (Abraham, 2024a). The algorithm decides who is to be killed (Aljazeera, 2024c). With 37 000 targets generated within the first 11 days, the permitted civilian casualty ratio varied from 1:20 for low-ranking operatives to 1:100 for hypothetical high-ranking Hamas commanders, leading some international experts to define it as a “massive assassination machinery” (Abraham, 2023). The glorification of Lavender as a tool for a massive killing campaign rests on the fact that it removes the friction between humans and the decision to kill by treating that decision as a matter of pure probabilistic statistics (Aljazeera, 2023b).

Second, “Habsora”, or “The Gospel” in English, is a system largely built on Israeli artificial intelligence that can “create” building and structure targets almost automatically, at a rate that far exceeds what was previously possible. This AI system, as described by a former Israeli intelligence officer, essentially facilitates a “mass assassination factory” (Abraham, 2023). “The emphasis is on damage and not on accuracy,” as stated by Israel Defense Forces (IDF) Spokesperson Daniel Hagari (The Guardian, 2023). It is worth mentioning the “power targets” strategy, which enables the use of automatic tools to target and bombard, at a fast pace, high-rise buildings, electricity towers, media centres, schools, hospitals, and UN operating facilities as a strategy of “deterrence”, forced evacuation, and power (Abraham, 2024a). On the one hand, these are targeted, not collateral damage, and they cause human suffering (Abraham, 2023). On the other hand, the strategy helps gain exponential power and a domestic image of “victory” to keep Netanyahu in office while he faces unpopular protests at home and an arrest warrant for war crimes issued by the International Criminal Court (ICC, 2024). Among other AI systems, there is also “Alchemist”, which helps automate target generation at an unprecedented rate (Israel Defense, 2021).

Beyond the overreliance on AI systems that target people and infrastructure, the Israeli army also uses another software system called “Where’s Daddy?” It is used to systematically attack targeted individuals while they are in their homes, usually at night while their whole families are present, rather than during the course of military activity (Abraham, 2023; Aljazeera, 2024c). Finally, the army often prefers to use “unguided missiles”, in contrast to “smart” precision bombs; these can destroy entire buildings on top of their occupants and cause significant casualties. Such so-called “dumb bombs” comprise 45% of the air-to-ground munitions used in Gaza (CNN, 2023a), and that is where the allegation of “indiscriminate bombing” rings true, as also acknowledged by then-US President Joe Biden despite America’s historical political and military support for Israel (CNN, 2023b).

Thus, because of an AI program’s “decisions”, there is no “collateral damage”: the unprecedented number of civilians who were not involved in the fighting, particularly children, is not an error or flaw of the system but, on the contrary, a feature of its effectiveness and success rate (Aljazeera, 2024c). That explains why, in the 15 months of “Operation Iron Swords” following 7 October 2023 (as of January 2025), almost 47 000 people had been killed in Gaza and the occupied West Bank, including 18 000 children, and more than 100 000 injured (Aljazeera, 2025b). Only a few Hamas military leaders have allegedly been killed. Thus, the war is not against “Hamas”; through indiscriminate killing automated by AI, it is against the Palestinian people. It is also a war on children. Before the war, 47% of the population were children. With almost 18 000 orphans and an unprecedented number of children amputated, blinded or left with permanent disabilities, the term “WCNSF” (wounded child, no surviving family) has become common in hospitals. Children are marked as “unknown”. The UN estimates that some 40% of the people in Gaza have lost their identification cards and other documents, making it harder to identify unaccompanied children and reunite them with their families, or leaving no one to bury them if they do not survive. Left on their own, forcibly separated children are exposed to various dangers and heightened risks of exploitation, neglect, and abuse (UNICEF, 2024). This amounts, therefore, to an AI genocide. From a humanitarian perspective, it leaves unanswered the question of whether those automated systems are “intelligent” or quite the opposite.

Big Tech’s responsibility: money talks

The American Military-Industrial Complex (MIC) is not the only beneficiary of American political and military support for Israel. It is fundamental to understand the links between the war in Gaza and the involvement in warfare of Silicon Valley’s Big Tech companies. For example, the “Where’s Daddy?” AI system uses Google Maps (Aljazeera, 2024c). The company’s policy states that its products cannot be used to cause immediate harm. Nevertheless, by allowing the Israeli army to use its products, Google is not only violating its own policy but is also complicit in the genocide. Facial-recognition photo software used to create “hit lists” has also equipped Israel with the largest AI surveillance model (Aljazeera, 2024a). Lavender also uses data from WhatsApp, which Facebook describes as a private and secure one-to-one encrypted platform. OpenAI and Microsoft are also leading AI companies whose products, such as ChatGPT, have been used for warfare. Amazon AWS’s cloud service is likewise being used to store surveillance information on Gaza’s population, while Israel procures further AI tools from Google and Microsoft for military purposes (Abraham, 2024b).

As an extension of US imperial power, US tech corporations are eager to support Israeli atrocities (Kwet, 2024). These companies increase their profits, and thus their share prices, as their technology is perceived to be militarily effective. Importantly, they also become attractive political partners for politicians worldwide who seek election and are interested in reproducing the same warfare strategy, as illustrated by nominations to Trump’s incoming administration. Consequently, private companies and AI are increasingly meddling in democratic processes, whether by collecting private data or by disseminating disinformation, which directly impacts fundamental freedoms, power struggles, and checks and balances (Aljazeera, 2023c).

Algorithms have a licence to kill with impunity (Gounaris and Kosteletos, 2020). “Killer robots” do not go to jail. So not only are States accountable, but tech companies are as well. As even the most ardent adherents of the Liberal theoretical school of International Relations will agree, the companies that create these technologies sit at the epicentre of power. The irony, then, is that mitigating the potential harms depends on regulation that relies on the goodwill of those who produce the technology. This is a clear conflict of interest.

Power, Ethics and Humanitarian Law

AI sits at the intersection of political, ethical and humanitarian concerns (Surber, 2018; Khan et al., 2022). As Vladimir Putin stated in 2017, “Whoever becomes the leader in this sphere (AI) will become the ruler of the world” (Lipton, 2023). As with the Cold War and its space race, AI is the new “moon”, the last territory to be conquered. That explains the militarisation of AI and why it is a key piece of the future of warfare (Garcia, 2019; Gruszczak and Kaempf, 2023). It is a tool of military power as much as of political and economic power. In Gaza in particular, the use of AI in war-making sits at the epicentre of a core debate in political philosophy: does the end justify the means? Another major concern is the “black box” problem: programmers cannot know what a computer operating an AI-empowered weapons system will “learn” from the algorithm they use. It is unknown whether, at the time of deployment, the system will comply with the prohibition on the use of force and the human right to life that applies in both war and peace, or whether it will be able to distinguish a child holding a teddy bear from a soldier holding a rifle. Regardless of any “effectiveness” or “improvements to be made”, the core ethical objection is that mechanised killing affronts human dignity (O’Connell, 2023). The principles of International Humanitarian Law, such as proportionality and accountability, are directly in jeopardy. It is hard to believe that Artificial Moral Agents (AMAs) exist as components of the implementation of moral decision-making (Gonzalez, 2020). With the militarisation of AI, human moral agency and responsibility under jus in bello have been substituted by mass target creation and lethality, precisely because of the liability of an “excess of effectiveness” (Allhoff et al., 2013).

AI to save lives

This is not a superficial debate between friends and foes. No doubt, Artificial Intelligence has great potential to support humanitarian aid in the field (The New Humanitarian, 2023). However, the issue is not identifying how AI can be used to promote peace and sustainable development, and help humanitarians save lives in the field. It is about acknowledging that the combined use of AI in Lethal Autonomous Weapons Systems has already been tested as a tool for genocide, given the unprecedented and exponential lethality seen in Gaza. Substantive efforts towards its regulation have been made, including by the Holy See, by the Secretary-General of the United Nations through the New Agenda for Peace, and by one of the fathers of AI, Yoshua Bengio (UN News, 2023; UN, 2023; Aljazeera, 2023a). “This is really one of the most significant inflection points for humanity,” as stated by Alexander Kmentt, Austria’s chief negotiator on the issue (UN, 2023). AI is already an irreversible tool in warfare. Therefore, international regulations come late but are crucial before the genie is commanded by the wrong master, or no master at all.


SDG Alignment

Nation states, global corporations and institutions are all wrestling with the double-edged sword of AI, yet this rapidly emerging agenda is not explicitly present in the current UN SDG 2030 framework, which was developed a decade ago. We expect that the framework’s next iteration will have to address the ethical and moral questions and challenges this research poses.

SDG 16 – Peace, Justice and Strong Institutions

Target 16.1

Significantly reduce all forms of violence and related death rates everywhere

Target 16.a

Strengthen relevant national institutions, including through international cooperation, for building capacity at all levels, in particular in developing countries, to prevent violence and combat terrorism and crime

SDG 17 – Partnerships for the Goals

 

References

ABRAHAM, Y. 2023. ‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza. In: 972 Magazine. https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

ABRAHAM, Y. 2024a. ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza. In: 972 Magazine. https://www.972mag.com/lavender-ai-israeli-army-gaza/

ABRAHAM, Y. 2024b. ‘Order from Amazon’: How tech giants are storing mass data for Israel’s war. In: 972 Magazine. https://www.972mag.com/cloud-israeli-army-gaza-amazon-google-microsoft/

ALJAZEERA. 2023a. Is AI really coming for us all? | The Listening Post. Aljazeera, 10 Jul 2023. Available at https://www.youtube.com/watch?v=pPDrhncqs8Q

ALJAZEERA. 2023b. Israel turns to a new AI system in war on Gaza. Aljazeera. Available at https://www.youtube.com/watch?v=l1lCQV1MHtI

ALJAZEERA. 2023c. Yoshua Bengio: Democracy is not safe in an AI world. Aljazeera, 13 Aug 2023. Available at https://www.youtube.com/watch?v=GL8W6jW9dV4

ALJAZEERA. 2024a. Google employees protest company’s ties with Israeli government. Aljazeera, 17 April 2024. Available at https://www.youtube.com/watch?v=Wg5OtAQveho&list=PLzGHKb8i9vTysJlqfhIyEieT2FqwITBEj

ALJAZEERA. 2024c. Israel’s shocking AI tools & Google’s complicity in Gaza. Aljazeera, 13 April 2024. Available at https://www.youtube.com/watch?v=cYQcT2Lv-y4

ALJAZEERA. 2025b. Israel-Gaza war in maps and charts: Live tracker. Data from 7 Oct 2023 to 20 Jan 2025. https://www.aljazeera.com/news/longform/2023/10/9/israel-hamas-war-in-maps-and-charts-live-tracker

CNN. 2023a. Exclusive: Nearly half of the Israeli munitions dropped on Gaza are imprecise ‘dumb bombs,’ US intelligence assessment finds. CNN. https://edition.cnn.com/2023/12/13/politics/intelligence-assessment-dumb-bombs-israel-gaza/index.html

CNN. 2023b. Rifts between Biden and Netanyahu spill into public view. By Kevin Liptak and Jeremy Diamond, CNN. https://edition.cnn.com/2023/12/12/politics/biden-israel-losing-support-netanyahu/index.html

DEMOCRACY NOW. 2024. Lavender & Where’s Daddy: How Israel Used AI to Form Kill Lists & Bomb Palestinians in Their Homes. Available at https://www.youtube.com/watch?v=4RmNJH4UN3s

ALLHOFF, F., EVANS, N. G. & HENSCHKE, A. 2013. Routledge Handbook of Ethics and War: Just War Theory in the 21st Century. Routledge.

GARCIA, E. V. 2019. The militarization of artificial intelligence: a wake-up call for the Global South. SSRN.

GONZALEZ, E. A. 2020. On the Ethical Dimension of Artificial Moral Agents Following the Principles of Jus In Bello (Justice in War). Academia.

GOUNARIS, A. & KOSTELETOS, G. 2020. Licensed to Kill: Autonomous Weapons as Persons and Moral Agents. Personhood, Hellenic-Serbian Philosophical Dialogue Series.

GRAY, C. H. 2025. AI, Sacred Violence, and War: The Case of Gaza. Palgrave.

GRUSZCZAK, A. & KAEMPF, S. 2023. Routledge Handbook of the Future of Warfare, Routledge.

ICC. 2024. International Criminal Court: Netanyahu, arrest warrant issued on 21 November 2024. https://www.icc-cpi.int/defendant/netanyahu

ISRAEL DEFENSE. 2021. IDF used artificial intelligence to expose Hamas commanders, says top IDF commander. Israel Defense. https://www.israeldefense.co.il/en/node/57246

KHAN, A., KHAN, A. S. & KHAN, I. 2022. Responsibility of Killer Robots for Causing Civilian Harm: A Critique of AI Application in Warfare Doctrine. Pakistan Journal of International Affairs, 5.

KWET, M. 2024. How US Big Tech supports Israel’s AI-powered genocide and apartheid. In: Aljazeera. https://www.aljazeera.com/opinions/2024/5/12/how-us-big-tech-supports-israels-ai-powered-genocide-and-apartheid

LIPTON, E. 2023. A.I.-controlled killer drones become reality. The New York Times, Nov. 21, 2023. https://www.nytimes.com/2023/11/21/us/politics/ai-drones-war-law.html

O’CONNELL, M. E. B. 2023. Banning Autonomous Weapons: A Legal and Ethical Mandate. Ethics & International Affairs, 37, pp. 287–298.

SURBER, R. 2018. Artificial Intelligence: Autonomous Technology (AT), Lethal Autonomous Weapons Systems (LAWS) and Peace Time Threats. Zurich: ICT4Peace Foundation and Zurich Hub for Ethics and Technology.

THE GUARDIAN. 2023. ‘The emphasis is on damage and not on accuracy,’ said IDF Spokesperson Daniel Hagari on Oct. 9. The Guardian, 10 Oct 2023. https://www.theguardian.com/world/2023/oct/10/right-now-it-is-one-day-at-a-time-life-on-israels-frontline-with-gaza

THE NEW HUMANITARIAN 2023. Four ways ChatGPT could help level the humanitarian playing field. The New Humanitarian. https://www.thenewhumanitarian.org/opinion/2023/03/20/ways-chatgpt-could-help-humanitarian-field

UN. 2023. First Committee Approves New Resolution on Lethal Autonomous Weapons, as Speaker Warns ‘An Algorithm Must Not Be in Full Control of Decisions Involving Killing’. General Assembly First Committee, 78th session, 28th meeting, A/C.1/78/L.56. United Nations. https://press.un.org/en/2023/gadis3731.doc.htm

UN NEWS. 2023. A ‘club of elites’ will not realise the multilateral future we need, Holy See tells UN Assembly. UN News, 26 Sep 2023. https://news.un.org/en/story/2023/09/1141507

UNICEF. 2024. Gaza: The World’s Most Dangerous Place to be a Child. https://www.unicef.org.au/media-release/gaza-the-world-s-most-dangerous-place-to-be-a-child?


How to Cite this Chapter

Watts, IP 2025, ‘The Genie is Out: is AI a Tool of War or Peace?’, in Boddington, E., Chandran, B., Dollin, J., Har, J. W., Hayes, K., Kofod, C., Salisbury, F., & Walton, L. (eds), Sustainable development without borders: Western Sydney University to the World, 2025 edn, Western Sydney University, Sydney. Available from https://doi.org/10.61588/TESB2263


Attribution

© 2024 Dr. Izabela Pereira Watts. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License https://creativecommons.org/licenses/by-nc-nd/4.0/deed.en. This means you can share this article in its original form, giving appropriate credit to the author, but you cannot use it commercially, modify it, or create derivatives from it.


  1. Izabela Pereira Watts, Western Sydney University, School of Social Sciences, Australia

Licence


The Genie is Out: is AI a Tool of War or Peace? Copyright © 2025 by the respective authors of individual chapters, licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.