Something historic is happening right now in 2026. The United States and Iran are at war — and for the first time in history, Artificial Intelligence is playing a major role in how that war is being fought.
This is not science fiction. This is real. And it is changing everything we know about modern warfare.
In this post I am going to explain what is happening between the US and Iran, how AI is being used in this conflict, what Project Maven is, and why this matters for the entire world — all in the simplest words possible.
What Happened — The US-Iran War Explained Simply
On 28 February 2026, the United States and Israel launched a military offensive against Iran. The strikes began suddenly and were massive in scale.
In just the first 24 hours, the US military struck over 1,000 targets inside Iran. To put that in perspective, that is roughly one target every minute and a half, around the clock, for an entire day. That kind of speed was never possible before. Not without AI.
Since then the conflict has continued. The US has now struck over 11,000 targets inside Iran. At least 1,300 people have been killed according to Iranian officials. Thirteen American soldiers have also died. It is a serious and devastating conflict with real human consequences on all sides.
The reason the US could move so fast and hit so many targets in such a short time comes down to one thing — Artificial Intelligence.
What is AI Doing in This War?
Think of war like a very complicated game of chess. Before AI, planning each move took a very long time. Soldiers and analysts had to manually look at satellite images, read intelligence reports, cross-check locations, and decide whether a target was safe to hit or not. This could take hours or even days for a single target.
AI changed all of that.
The US military's top commander in the Middle East, Admiral Brad Cooper, explained it this way: "Our warfighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react."
He also added something important: "Humans will always make final decisions on what to shoot and what not to shoot and when to shoot. But advanced AI tools can turn processes that used to take hours and sometimes even days into seconds."
So in simple terms — AI is not pulling the trigger. Humans still decide whether to fire. But AI is doing all the background work of finding targets, analysing data, and presenting options to human commanders — at a speed no human team could ever match.
What is Project Maven — The AI Doing the Work
The main AI system the US military is using in this war is called Project Maven.
Think of Project Maven like Google Earth — but for war. Imagine a massive digital map of Iran showing every building, every road, every military base, every vehicle — all updated in real time using satellites, drones, and surveillance aircraft. Now imagine an AI system that can look at all of that information simultaneously, identify which locations are military targets, rank them by priority, and present the list to commanders instantly.
That is essentially what Project Maven does.
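Maven's internals are classified, so nothing below describes the real system. It is only a minimal sketch in Python of what "a live map of machine-identified objects" could look like as data, with every field name invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class MapEntry:
    """One machine-annotated object on a shared battle map (all fields invented)."""
    lat: float
    lon: float
    label: str         # the model's best guess, e.g. "radar_site" or "warehouse"
    confidence: float  # a score in [0, 1]; a statistical guess, not ground truth
    source: str        # where the image came from: "satellite", "drone", ...

def flag_for_review(entries: list[MapEntry], threshold: float = 0.8) -> list[MapEntry]:
    """Keep only entries worth an analyst's time. Decides nothing on its own."""
    return [e for e in entries if e.confidence >= threshold]
```

The important property is that every entry carries a confidence score. The map is a pile of statistical guesses, which is exactly why the human-review step matters.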
The Pentagon launched Project Maven back in 2017. Google was originally involved in building it — but over 3,000 Google employees signed a letter opposing the work, and Google pulled out. A company called Palantir then took over and has run the system ever since.
In the Iran war, Palantir's Maven system is being used alongside another AI tool — Claude, made by Anthropic — to analyse massive streams of battlefield data, summarise intelligence reports, and help commanders understand the situation on the ground in real time.
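Public reporting does not explain how Claude is wired into Palantir's platform. But the "summarise intelligence reports" half of that sentence follows the same pattern anyone can reproduce with Anthropic's public Python SDK. A minimal sketch, assuming an API key in the environment plus a placeholder file name and an example model name:

```python
import anthropic  # pip install anthropic; expects ANTHROPIC_API_KEY to be set

client = anthropic.Anthropic()

report = open("field_report.txt").read()  # hypothetical long report document

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # example model name; substitute a current one
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": f"Summarise the key facts in this report in five bullet points:\n\n{report}",
    }],
)
print(message.content[0].text)
```

The military deployment presumably runs on isolated infrastructure with very different controls, but the basic read-a-document, ask-for-a-summary loop is the same idea.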
How AI Finds Targets — Step by Step
Here is how the AI targeting process works in simple steps:
Step 1 — Data Collection: Satellites, drones, and spy planes collect thousands of images and videos every hour from inside Iran. This generates an enormous amount of data — far more than any team of humans could ever review manually.
Step 2 — AI Analysis: Project Maven's AI scans all of this data automatically. It looks for military equipment, weapons storage facilities, radar systems, command centres, and other potential military targets. It identifies them, notes their exact coordinates, and flags them for human review.
Step 3 — Priority Ranking: The AI then ranks these targets by priority — which ones are most important, which ones pose the biggest threat, which ones should be hit first. This ranking is presented to human commanders.
Step 4 — Human Decision: Human military commanders review the AI's recommendations and make the final call. They decide what to hit, when to hit it, and how.
Step 5 — Strike: The order is given and the military carries out the strike.
The AI speeds up steps 1, 2, and 3 dramatically — turning what used to take days into seconds. This is how the US was able to hit 1,000 targets in a single day.
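As a sketch of where that speed-up lives, here are the five steps compressed into a few lines of Python. Everything in it is hypothetical and illustrative; the point is the shape of the process, with the human gate as an explicit, unskippable step:

```python
def targeting_cycle(detections, human_approve):
    """Toy model of the five steps above; every name here is hypothetical.

    `detections` is a list of (confidence, description) pairs, standing in
    for the output of steps 1 and 2 (collection plus AI analysis).
    `human_approve` stands in for a commander's review in step 4.
    """
    # Step 3: priority ranking -- sort by model confidence, highest first.
    # This is the part AI turns from days of staff work into seconds.
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)

    # Step 4: the human decision. Nothing gets past this line automatically.
    approved = [d for d in ranked if human_approve(d)]

    # Step 5: only human-approved items are returned for action.
    return approved

# Example: a reviewer who rejects anything the model is less than 95% sure of.
print(targeting_cycle(
    [(0.97, "radar site"), (0.62, "unidentified building"), (0.99, "vehicle depot")],
    human_approve=lambda d: d[0] >= 0.95,
))
# [(0.99, 'vehicle depot'), (0.97, 'radar site')]
```

Making the human check a required parameter rather than an optional setting mirrors what Admiral Cooper describes: the automation accelerates steps 1 to 3 but cannot reach step 5 on its own.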
The Controversy — When AI Gets It Wrong
Not everything about AI in warfare is impressive. There is a dark side to this story that cannot be ignored.
On 28 February 2026 — the very first day of the war — a US strike hit the Shajareh Tayyebeh girls' school in Minab, southern Iran. Over 170 people were killed. Most of them were children.
The school was located near an Iranian military base. The Pentagon is now investigating whether the AI targeting system made an error — whether it identified the school as a military target when it was not.
This is exactly the kind of mistake that experts have been warning about for years. AI systems can make errors. They can be trained on faulty data. They can fail to understand context. And in war, an AI error does not mean a wrong answer on a test — it means innocent people die.
Chatham House, the international affairs think tank, described the situation this way: large language models work by predicting a sequence of words based on statistical probability. They will likely get it right most of the time, but they will not get it right all of the time.
In war, "most of the time" is not good enough.
The Anthropic Controversy — Claude AI in the War
Here is where the story gets even more interesting — and connects to something you might recognise.
Claude AI is made by Anthropic — the same company behind the AI assistant you can use at claude.ai. The US military was using Claude as part of its targeting system through Palantir's platform.
But Anthropic drew a hard line. The company refused to allow its AI to be used for fully autonomous weapons systems or mass surveillance. Anthropic CEO Dario Amodei said: "I cannot in good conscience accede to the Pentagon's request."
The Pentagon disagreed and the two sides fell out — leading to the famous QuitGPT movement where 1.5 million people cancelled their ChatGPT subscriptions and switched to Claude, because they saw Anthropic as the company that stood up for ethical AI.
It is an extraordinary situation. An AI company refused to do what the world's most powerful military asked — and millions of ordinary people rewarded that decision by switching to their product.
Iran's Response — AI vs AI
Iran is not sitting back while the US uses AI against them.
Iran has responded with an asymmetric cyber campaign. State-backed hacking groups have deployed ransomware attacks, denial-of-service attacks, and destructive wiper attacks designed to permanently destroy data on servers.
Iran has also been using drone technology aggressively. On 1 March 2026, an Iranian drone boat struck an oil tanker in the Gulf of Oman — the first confirmed state-led deployment of explosive drone boats against commercial shipping.
So on one side you have the US using AI to find and strike targets at unprecedented speed. On the other side you have Iran using AI-guided drones and cyber attacks to strike back. This is genuinely the world's first large-scale AI vs AI conflict.
What the World is Saying
The use of AI in this war has sparked a global debate about where the line should be drawn.
China's Defense Ministry spokesperson said: "The unrestricted application of AI by the military — giving algorithms the power to determine life and death — not only erodes ethical restraints and accountability in wars but also risks technological runaway."
The United Nations passed a resolution in December 2025 on "Artificial intelligence in the military domain and its implications for international peace and security." A major international meeting on this topic is scheduled for June 2026.
The concern from experts is clear — AI makes war faster. But faster does not always mean more accurate. And in a school full of children, one AI mistake changes everything.
What This Means for the Future
The US-Iran war is being called the world's first large-scale AI war. And what happens here will shape how wars are fought for the next 50 years.
Here is what we can take from this:
AI makes war faster — The US struck 1,000 targets in 24 hours. That was impossible before AI. As AI improves, this speed will only increase.
AI can make mistakes — The school strike shows that AI targeting systems are not perfect. More oversight and human checks are needed.
AI companies have power now — Anthropic's refusal to allow Claude to be used for autonomous weapons showed that tech companies — not just governments — have a voice in how wars are fought.
Every country will want this technology — After seeing how effective AI targeting is in Iran, every major military power in the world will accelerate their own AI warfare programmes. This is an arms race — but for software.
The rules are not written yet — Nobody has figured out the ethical framework for AI in war. The June 2026 UN meeting is a start. But the technology is already ahead of the rules.
Simple Summary
| What | Details |
|---|---|
| War Started | 28 February 2026 |
| Countries | USA + Israel vs Iran |
| Targets Struck | 11,000+ by USA |
| AI Systems Used | Project Maven (run by Palantir) + Claude (Anthropic) |
| What AI Does | Finds targets, ranks priority, speeds decisions |
| Human Role | Final decision to strike always human |
| Biggest Controversy | Girls' school strike killed 170+ people |
| Iran's Response | Cyber attacks + drone boats |
Final Thoughts
The US-Iran war has shown the world something it cannot unsee: AI has arrived on the battlefield. And it changes everything.
This is not a story about robots fighting wars. Humans are still making the decisions. But AI is now the brain behind those decisions — processing information faster than any human ever could, finding targets in seconds that used to take days, and reshaping the entire pace and nature of modern conflict.
Whether that is a good thing or a terrifying thing depends entirely on how carefully the humans using that AI make their final decisions.
One school full of children in southern Iran is a reminder that speed without accuracy is not progress. It is tragedy.
The world is watching. And the rules for how AI can be used in war need to be written — urgently — before more mistakes are made that cannot be undone.
This article is based on verified reports from NBC News, NPR, Bloomberg, Washington Post, Al Jazeera, Chatham House, and Georgia Tech research. All information reflects the situation as of April 2026.
