Orla Mahon explores how AI is reshaping the way wars are fought, and what the ethical implications of this may be.
AI is reshaping modern warfare. This is a reality that we have been told to confront repeatedly. Yet perhaps nothing has presented this new reality to us more starkly than the recent strike on an Iranian school, which killed at least 170 civilians - many of them schoolchildren.
This should never happen, fundamentally - civilians, such as children and their teachers, should never become targets in war; this is a direct violation of the Geneva Conventions. But, more accurately, this should never have happened - it has been claimed that the strike was a misfire by the AI software Claude.
The escalating conflict between the United States, Israel, and Iran has become a primary staging ground for the integration of AI into modern warfare. Since the US-Israeli offensive began on February 28, AI-assisted technology has been used to guide missiles and manage tactical operations in the Middle East. And yet, this is not a new development. The use of AI in warfare has long since moved past the hypothetical, beyond its confinement to popular culture. The Terminator series gave us ‘Skynet,’ a defence system that develops self-awareness and moves to exterminate humanity - this no longer feels like fiction, but rather like a stark warning of what our future may hold.
In the eightieth session of the United Nations General Assembly, which took place last October, the Secretary-General presented a report on the use of AI in military domains and its implications for international peace and security. This report listed several ways in which AI has already begun to shape military operations: target analysis, which generates strike recommendations; identification of individuals, linking military operations to databases of those connected to armed groups; autonomous navigation; defensive systems that can autonomously detect, track, and intercept perceived threats; and AI-assisted robots deployed in reconnaissance, logistics, and combat roles. All of these applications raise pertinent questions - how far can we outsource human intelligence to AI, especially in life-and-death scenarios? In an area like AI, where we already struggle to trust its outputs, how can we begin to feel at ease applying it to complex battlefields, which already test human judgement? Can we trust AI to rise above the fog of war?
The US military utilises large language models (LLMs) across its operations - in logistical and office support, intelligence gathering and analysis, and decision support on the battlefield. A central component of these operations is the Maven Smart System, which uses AI for image processing and tactical support - for instance, Maven can speed up attack capabilities by suggesting and prioritising targets. According to reports, Maven has been used in previous conflicts - and indeed, in the attacks on Iran. Since 2024, the Maven system has been supported by Anthropic’s Claude under a $200-million contract.
It has been claimed that AI’s precision targeting has the potential to reduce civilian casualties during war. However, when we look towards conflicts where AI is used in military operations - Ukraine, Gaza, Iran - we see high civilian death tolls. Craig Jones, a political geographer at Newcastle University, stated in a comment to Nature: “There is no evidence that AI lowers civilian deaths or wrongful targeting decisions, and it may be that the opposite is true.”
“There is no evidence that AI lowers civilian deaths or wrongful targeting decisions, and it may be that the opposite is true.”
Israel’s artificial intelligence programme, ‘Lavender,’ has been used throughout its attacks on Gaza to mark potential bombing targets. Sources from the Israeli military have claimed that during the first weeks of Israel’s attacks, the army relied almost entirely upon Lavender, which flagged as many as 37,000 Palestinians as suspected militants, marking them and their homes for potential air strikes. +972 Magazine reports that in these first few weeks, human personnel largely served only as a “rubber stamp” for the decisions of the AI system, noting that military personnel “would personally devote only about ‘20 seconds’ to each target before authorizing a bombing... This was despite knowing that the system makes what are regarded as ‘errors’ in approximately 10 percent of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.”
In January of this year, the US Department of Defense (DoD) issued a memo requiring that AI contracts allow for “any lawful use” without constraints. Anthropic responded with a statement on February 26th, asserting that it will not remove the safeguards preventing Claude from engaging in fully autonomous weaponry. (The statement, however, is quick to clarify that Claude remains available for use in partially autonomous weaponry.) The statement warns: “Without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.”
Returning to the United Nations General Assembly, the UN pointed to three major categories of risk in the use of AI in warfare. The first is technological - if AI can only be as reliable as the data it has on hand, it may behave unpredictably when it encounters situations that do not align with its training data. The risk of AI misidentifying a civilian as a combatant seems a worrying gamble to take, especially amidst concerns of bias affecting AI’s functioning (particularly racial bias). Secondly, the UN highlighted security, noting that algorithmic intervention has the potential to speed up conflicts: interaction between two militaries using AI could escalate a crisis without any human intervention. The third category of concern is legal and ethical. International law can hold states and individuals responsible for atrocities committed in war - but what law can be brought to bear on an AI system? As the Secretary-General’s report states, AI has the potential to “obfuscate the linearity of this process [of responsibility].” Is there any reality in which an AI agent can be put on trial for committing war crimes?
Perhaps Claude, therefore, presents the epitome of Hannah Arendt’s concept of the banality of evil. Eichmann claimed he was not at fault for his role in the Holocaust because he was ‘only obeying orders’; if Claude and its facilitators are ever brought to trial, the defence may echo similar sentiments. It could be argued that this is the military-industrial complex’s ideal conception of war - no one person can ever be found at fault if the burden of evil rests upon the shoulders of something concretely non-human. Perhaps this idea will help certain people sleep better at night.
At the UN Security Council in September 2025, António Costa, President of the European Council, stated that, “The development of lethal autonomous weapons systems threatens to remove human accountability from decisions of life and death. The risks are real: miscalculation, escalation, and proliferation. We must act before the tipping points become irreversible.”
“The development of lethal autonomous weapons systems threatens to remove human accountability from decisions of life and death."
Returning to the case of the bombing of the Iranian school, questions continue to arise over whether AI systems misidentified the school as a target. It has been reported that at least 170 civilians have been killed - many of them schoolchildren under the age of 12 and their teachers. In a second bombing on the school, which occurred shortly after the first, first responders and parents of the children were hit as they dug through the rubble for survivors.
Whilst further details of how the strikes were authorised are yet to emerge, the extent of AI’s involvement continues to be a topic of discussion. It could be argued that military operations have always run the risk of acting on inaccurate intelligence. Operation Igloo White, a surveillance effort on the Ho Chi Minh Trail conducted between 1967 and 1973, was often misled by Vietnamese troops deliberately triggering its sensors. Inaccurate intelligence caused the US military to strike the Chinese embassy in Belgrade in 1999. But can the bombing of the Iranian school be chalked up as just another instance in a long series of US intelligence failures, whether or not AI was involved? From my own perspective, the use of AI in warfare feels uniquely sinister.
LLMs have begun to infiltrate every aspect of our lives - our classrooms, our workplaces, our relationships. Even simple Google searches are now answered first (often inaccurately) by Gemini, Google’s in-house LLM. I recently spoke to someone who told me they had used ChatGPT to write a poem for their partner on Valentine’s Day. And it is these same LLM systems that are now being used in modern warfare - it feels as though we are tangled in a blood-soaked web, with nowhere to hide from the LLMs that are ever-present in our day-to-day activities.
With its use in modern warfare, AI is now as integrated into death as it is into everyday life.
