WHOEVER RULES AI RULES THE WORLD

BY ILARIA DEL GRANDE
11/03/2026
Artificial intelligence is reshaping modern warfare at a pace that outstrips both regulation and public awareness. Through documented cases — from the IDF's use of AI systems in Gaza to the recent strikes on Iran — the piece traces how automation introduces a dangerous moral distance into life-and-death decisions. Structural risks such as the "black box effect," automation bias, and compressed decision-making timelines make accountability increasingly difficult to locate. Behind these technologies lies a deeply political landscape, where Pentagon contracts are shaped as much by campaign donations as by national security logic. When the decision to kill is distributed across code, data, and command chains, does responsibility still exist?
"'Warfare without risk critically undermines the meta-legal principles that underpin the right to kill in war.”
In his book A Theory of the Drone, Grégoire Chamayou warns against the danger of distancing ourselves from war — of turning killing into a remote, consequence-free act. Although he was writing about drones, the argument applies with equal force to artificial intelligence: the smarter the weapon, the further the operator stands from the moral weight of the decision.
This is no longer hypothetical. Artificial intelligence is now embedded in military planning, target selection, and battlefield simulation. It is not a new phenomenon, but its development is accelerating at a pace that outstrips both regulation and public awareness. What is new is the scale — and the question of who remains accountable.
The recent rupture of the contract between the Department of Defense and Anthropic illustrates how deeply political this landscape has become. The Trump administration labeled Claude — Anthropic's AI — a supply chain risk, a designation typically reserved for Chinese firms suspected of espionage. Within hours, the Pentagon signed a new agreement with OpenAI as a replacement. Notably, the new deal includes the same restrictions that reportedly caused the previous agreement to collapse: a prohibition on mass surveillance and autonomous weapons. The contradiction is difficult to ignore. One possible explanation lies outside technology altogether — Sam Altman, OpenAI's CEO, donated $25 million to a pro-Trump super PAC in 2024.
In the military field, AI operates within what are known as ISR systems — Intelligence, Surveillance, and Reconnaissance. This is the sector responsible for gathering and processing information about the battlefield, harvesting it from satellites, manned aircraft, unmanned aerial systems, maritime vessels, and sensors.
Unmanned systems are a key part of this picture. Beyond increasing lethality and decreasing costs, they enhance ISR capabilities by intercepting communications and generating geospatial intelligence. Combined with AI, they can recognize patterns — symbols, objects, movements — that would appear random or insignificant to the human eye, particularly in fast-moving video footage.
Behind these systems stand private companies like Palantir Technologies, which has inked contracts with the US government in defense, intelligence, and immigration enforcement worth up to $10 billion over the next decade. Peter Thiel's company builds the software that guides weapons and machinery through missions and decisions: "Our software powers real-time, AI-driven decisions in critical government and commercial enterprises in the West, from the factory floors to the front lines."
One of Palantir's contracts covers Project Maven, launched in 2017 to help drones identify targets and later brought under Palantir's wing. Maven is a platform that fuses satellite imagery, sensor feeds, and textual sources for the Pentagon and NATO, which Palantir is now contracted to maintain and expand.
Palantir is now facing the consequences of Anthropic's ban from its Maven Smart System platform: unwinding the AI software will take months.
The deployment of AI in warfare is beginning to prompt reflection on the risk of escalation and on the shared responsibility of the governments operating it. In wargame simulations reported across several studies, AI models escalated toward nuclear action in up to 95 percent of runs; under pressure, the strategic models default to escalation rather than restraint.
The risks are significant. Even granting that human officials retain the final word, AI systems by design offer no way to trace or explain their algorithmic logic, a phenomenon known as the "black box effect." This opacity is compounded by what researchers call automation bias: the tendency of human operators to place excessive trust in AI outputs, treating them as more neutral than human judgment. They are not. When AI systems recommend new targets, the recommendations reflect the social biases embedded in the data they were trained on.
The deeper danger is structural. AI is compressing decision-making timelines in ways that leave little room for legal and ethical review. The faster the system operates, the easier it becomes to sideline accountability.
We call it precision. What it produces is distance — from the target, from the decision, from accountability.
The deployment of AI in military operations is not an isolated phenomenon. In recent years it has surfaced across multiple conflicts: Ukraine, Gaza, Nagorno-Karabakh, Syria, Yemen, Libya, and Ethiopia — and most recently, Iran. Among these, the IDF's deployment of AI in its genocidal campaign in Gaza is the most extensively documented.
Beginning in November 2023, reports published by +972 Magazine and Local Call revealed the main AI systems used by the Israeli military in its operations against Palestine. Three systems in particular stand out: Lavender, The Gospel, and Where Is Daddy.
Lavender identifies military targets by fusing and cross-referencing data in a smart database. It operates through "positive unlabeled learning," a technique that trains the algorithm on a small set of confirmed positive examples together with a large mass of unlabeled ones, allowing it to recognize patterns and flag individuals as potential combatants. Human officials set the threshold that separates "terrorists" from civilians, but the system is already compromised: it has reportedly classified a Palestinian human rights organization as a terrorist group.
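To make the mechanics concrete, here is a minimal Python sketch of the positive-unlabeled learning pattern described above, using the classic Elkan-Noto score correction. Everything in it (the synthetic features, the scikit-learn classifier, the 0.9 threshold) is an illustrative assumption, not a reconstruction of Lavender itself.

```python
# A minimal positive-unlabeled (PU) learning sketch using the classic
# Elkan-Noto (2008) score correction. All data here is synthetic; the
# features, classifier, and threshold are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row stands in for a person's aggregated metadata features.
X_pos = rng.normal(loc=1.0, size=(100, 5))    # confirmed positive examples
X_unl = rng.normal(loc=0.0, size=(2000, 5))   # everyone else: unlabeled

# Step 1: treat the unlabeled pool as negative and fit an ordinary classifier.
X = np.vstack([X_pos, X_unl])
s = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unl))])
clf = LogisticRegression(max_iter=1000).fit(X, s)

# Step 2: estimate c = P(labeled | positive) from the positives (a proper
# implementation would use a held-out set), then rescale the scores so
# they approximate P(positive | x).
c = clf.predict_proba(X_pos)[:, 1].mean()
scores = clf.predict_proba(X_unl)[:, 1] / c

# Step 3: a single human-set threshold turns similarity scores into flags.
threshold = 0.9
flagged = np.where(scores >= threshold)[0]
print(f"{len(flagged)} of {len(X_unl)} unlabeled individuals flagged")
```

The last step of the sketch is the one the reporting emphasizes: the model only measures statistical similarity to past positives, and a single human-chosen number converts that similarity into a designation.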
The Gospel functions similarly, using triangulated geolocation to identify military objects. It generates daily target scores across a range of sites — including tunnels and private family homes.
Where Is Daddy tracks the movements of Palestinian men by triangulating their mobile phones, identifying the moment they return to their family home in order to strike.
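Both The Gospel's geolocation and Where Is Daddy's phone tracking rest on the same geometric primitive: estimating a position from distance measurements to known points (multilateration, the range-based cousin of the triangulation the reporting describes). Below is a minimal Python sketch under invented assumptions: four hypothetical towers, a made-up true position, and synthetic noise.

```python
# A minimal multilateration sketch: recover a position from noisy distance
# estimates to known points. The tower layout, true position, and noise
# level are invented for illustration and imply nothing about real systems.
import numpy as np

towers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])  # km
true_pos = np.array([2.0, 3.0])

rng = np.random.default_rng(1)
# Ranges as they might be derived from signal timing, plus measurement noise.
ranges = np.linalg.norm(towers - true_pos, axis=1) + rng.normal(0, 0.05, len(towers))

# Subtracting the first tower's circle equation from the others linearizes
# the problem into A @ p = b, solvable by least squares.
A = 2 * (towers[1:] - towers[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(towers[1:] ** 2, axis=1) - np.sum(towers[0] ** 2))
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position (km):", estimate)  # lands near (2.0, 3.0)
```

With four towers and 50-meter noise, the least-squares estimate lands within tens of meters of the true position. The banality of the math is the point: the same few lines scale to any density of sensors.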
Palestine is not the end. The same logic, the same alliance, the same technology is now deployed against Iran. In the opening twelve hours of the Israeli-American offensive, over 900 strikes were carried out on Iranian targets. Despite the political fallout around Anthropic, the Pentagon was still operating with Claude embedded in its systems. AI was deployed across intelligence gathering, target selection, and battlefield simulation. It did not directly control the weapons — the Tomahawk missiles were guided by conventional systems — but its contribution hides behind every layer of the planning process. A mistargeting error killed more than 150 schoolchildren. Nobody outside the Pentagon knows whether AI contributed to that decision.
The question then becomes: is any of this legal?
Since 2018, UN Secretary-General António Guterres has called for the prohibition of Lethal Autonomous Weapons Systems under international law. As of today, however, as long as AI-augmented systems pass testing and their use is consistent with the two core principles of International Humanitarian Law, their deployment remains legal.
These two principles are distinction and proportionality. AI systems must be capable of discriminating between civilians and combatants, and must not inflict damage excessive relative to the military advantage gained. Therefore, in theory, if deployed with reasonable procedures, AI could enhance this capacity — processing data faster than human analysts and reducing reliance on outdated intelligence. In practice, neither Lavender nor The Gospel generates collateral damage estimates to factor in proportionality. The International Committee of the Red Cross has identified AI as a facilitator of error-prone decisions and an accelerant of civilian harm.
When the decision to kill is distributed across code, data and command chains, does accountability still exist or has it simply become unlocatable?
Abraham, Yuval. "'Lavender': The AI Machine Directing Israel's Bombing Spree in Gaza." +972 Magazine and Local Call, April 3, 2024. https://www.972mag.com/lavender-ai-machine-bombing-gaza/.
Wells, Winthrop. "Battlefield Evidence in the Age of Artificial Intelligence-Enabled Warfare." Chicago Journal of International Law 26, no. 1 (2025).
Deveraux, Brennan. A Human-Centric Framework: Employment Principles for Lethal Autonomous Weapons. Monograph Series. Carlisle, PA: U.S. Army War College Press, 2026.
Servidio, Giuseppe. "Ipotesi AI di Anthropic dietro i raid in Iran, come il Comando Centrale USA potrebbe aver usato Claude per l'attacco" [Anthropic AI Hypothesis Behind the Raids on Iran: How US Central Command May Have Used Claude for the Attack]. Geopop, March 3, 2026. https://www.geopop.it.
Shaikh, Kaif. "How AI-Powered Warfare in Iran Shrinks the Distance Between Data and Destruction." Interesting Engineering, March 3, 2026. https://interestingengineering.com.
Schmitt, Michael N. "The Gospel, Lavender, and the Law of Armed Conflict." Lieber Institute West Point — Articles of War, June 28, 2024. https://lieber.westpoint.edu.
Snow, Jackie. "OpenAI, Anthropic, and the Fog of AI War." Quartz, March 2026. https://qz.com.
OpenAI. "Our Agreement with the Department of War." OpenAI, February 28, 2026. https://openai.com/index/our-agreement-with-the-department-of-war/.
Human Rights Watch. "Questions and Answers: Israeli Military's Use of Digital Tools in Gaza." September 10, 2024. https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza.



