
ai
military-ai
autonomous-weapons
ethics
geopolitics
From Gaza's Lavender to the Iran strikes, AI isn't coming to the battlefield — it's already there, compressing kill chains, generating target lists, and forcing every company building on frontier models to reckon with what their technology enables.
Vivek Bhaugeerutty
March 30, 2026 · 8 min read
While the AI industry spent 2024 and 2025 debating alignment benchmarks and whether GPT-5 would achieve AGI, a different kind of AI arms race was already underway — one measured not in benchmark scores but in body counts.
The Middle East has become the world's most consequential proving ground for military AI. Not the sci-fi kind with autonomous killer robots making independent decisions, but something arguably more dangerous: AI systems embedded deep into existing military kill chains, accelerating targeting decisions from weeks to minutes, expanding the pool of people who can be marked for death, and compressing the human oversight window to almost nothing.
If you build on AI, sell AI, or deploy AI in production — and I do all three — this story isn't optional reading. It's the story of what happens when your stack gets weaponized.
The Israeli military's AI targeting apparatus in Gaza rests on three systems that have been extensively documented by +972 Magazine, The Guardian, and Human Rights Watch.
The Gospel (Habsora) is an AI that automatically reviews surveillance data — imagery, signals intelligence, communications intercepts — and recommends physical structures for bombing. According to retired IDF chief Aviv Kohavi, the system could generate 100 bombing targets per day, whereas human analysts might produce 50 in a year. That's not an incremental improvement. That's a paradigm shift in the industrial capacity for destruction.
Lavender operates on people rather than buildings. It assigns every resident of Gaza a numerical score estimating the likelihood they're affiliated with Hamas or Palestinian Islamic Jihad. Within the first six weeks of the war, Lavender had generated approximately 37,000 target recommendations. The system carried a known 10% error rate — meaning thousands of civilians were potentially misidentified. According to intelligence sources who spoke to +972, human analysts spent roughly 20 seconds reviewing each target, primarily just confirming the person was male.
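To make those figures concrete, here is the back-of-envelope arithmetic they imply (nothing more than the reported numbers multiplied out, not a model of the system itself):

```python
# Back-of-envelope arithmetic from the figures reported by +972 Magazine.
# This is only an illustration of scale, not a model of the actual system.

recommendations = 37_000   # target recommendations in the first six weeks
error_rate = 0.10          # reported error rate
review_seconds = 20        # reported human review time per target

misidentified = recommendations * error_rate
review_hours = recommendations * review_seconds / 3600

print(f"Potentially misidentified people: {misidentified:,.0f}")  # ~3,700
print(f"Total human review time: {review_hours:,.0f} hours")      # ~206
```

That works out to roughly 3,700 people potentially misidentified and about 206 total hours of human review, or around 26 eight-hour days of one analyst's attention spread across 37,000 life-and-death decisions.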
Where's Daddy? tracked targets via their phones and alerted operators when they returned home; that was when the strike came. The preferred approach was bombing people in their homes, at night, when their families were present. For junior militants, pre-authorized collateral damage thresholds reportedly allowed 15 to 20 civilian casualties per target. For senior commanders, that number exceeded 100.
The result? An air campaign of unprecedented scale and speed. In the first two months alone, Israel struck roughly 25,000 targets, more than four times as many as in previous Gaza wars. Target lists that once took a team of 20 officers 250 days to produce were generated by AI in a single week.
These systems don't run on air. They run on cloud infrastructure — specifically, American cloud infrastructure.
In 2021, Google and Amazon signed Project Nimbus, a $1.2 billion contract to provide the Israeli government with cloud computing and AI services. The contract terms were extraordinary: both companies agreed they could not restrict how Israel uses their products, even if that use violates their own terms of service. They also agreed to alert Israel if a foreign court requested access to Project Nimbus data.
Google publicly claimed the contract was limited to civilian workloads. But an internal report obtained by The Intercept in May 2025 revealed that Google knew it couldn't control how the technology would be used. A human rights consultancy hired by Google itself recommended withholding machine learning capabilities from the Israeli military — advice that was not followed.
Meanwhile, AP reporting found that the Israeli military's usage of Microsoft and OpenAI tools spiked to nearly 200 times pre-war levels by March 2024. Data stored on Microsoft servers doubled to over 13.6 petabytes. Microsoft's own internal review, concluded in September 2025, ultimately led the company to cut off certain cloud and AI services to a unit of the Israeli military.
The lesson here isn't subtle: cloud providers don't get to choose neutrality. When you provide the compute, you're part of the system — whether you acknowledge it or not.
This hits particularly close to home for anyone building on Claude (which I do — my company runs AI-for-code infrastructure for 2,200+ developers on Claude's API).
Anthropic partnered with Palantir and AWS in 2024 to bring Claude into classified government networks. By mid-2025, Claude was integrated into military workflows. Then in January 2026, Claude was reportedly used during the operation that captured Venezuela's former president Nicolás Maduro — the first documented use of a frontier AI model in such an operation.
What followed was a rapid escalation. The Pentagon pushed for unrestricted access to Claude for "all lawful uses." Anthropic refused to lift two red lines: no mass surveillance of U.S. citizens and no lethal autonomous warfare. Negotiations broke down. Defense Secretary Pete Hegseth designated Anthropic a supply chain risk. President Trump ordered the federal government to stop using Anthropic products.
As of late March 2026, a federal judge has blocked the Pentagon's ban, calling it illegal retaliation. But the damage is real — defense contractors have been told to drop Claude, HHS employees lost access to their chats and coding projects with only hours' notice, and the entire incident has surfaced a question every AI company will eventually face: what are you willing to lose to maintain your principles?
OpenAI, for its part, quietly changed its terms of use in early 2024 to allow for "national security use cases." Google followed by removing language prohibiting AI use for weapons and surveillance from its ethics policy. The market is signaling that the path of least resistance is compliance.
The 2026 U.S.-Israel strikes on Iran took everything observed in Gaza and scaled it further. In the first 12 hours, nearly 900 strikes were executed on Iranian targets — an operational tempo that would have been physically impossible without AI-driven planning and target generation. AI systems reportedly compressed planning cycles from days to hours, processing drone feeds, satellite imagery, and signals intelligence simultaneously.
Iran's retaliation revealed the other side of this transformation. In the first week of Tehran's counter-campaign, drones accounted for approximately 71% of recorded strikes on Gulf states. The UAE alone reportedly faced over 1,400 detected drones in just eight days. Cheap, increasingly autonomous weapons are overturning the economics of combat entirely.
The UN General Assembly passed a historic resolution in November 2025 calling for a legally binding treaty on lethal autonomous weapons by 2026; 156 nations voted in favor, while the United States, Russia, and Israel voted against. We are in what experts call the "pre-proliferation window" — the last moment before these weapons become as widespread and unregulated as small arms.
The conventional framing of AI ethics in our industry — bias in hiring algorithms, deepfakes, misinformation — feels almost quaint against this backdrop. The ethical frontier isn't whether your chatbot occasionally says something problematic. It's whether the model you fine-tune, the API you depend on, or the cloud you deploy to is part of a system that decides who lives and who dies.
A few things that should keep builders up at night:
The "human in the loop" is a fiction at scale. Every military deploying AI targeting insists on human oversight. In practice, intelligence sources describe 20-second reviews and rubber-stamp approvals. Automation bias — the tendency to trust machine outputs over independent judgment — is not a theoretical risk. It's been documented in active combat with lethal consequences.
Dual-use isn't an edge case — it's the default. Google Photos' facial recognition was reportedly used at military checkpoints. Microsoft's cloud computing stores surveillance intercepts. OpenAI's language models support intelligence analysis. The technologies we build for consumers and enterprises are the same ones being deployed in kill chains. There is no clean separation.
Your vendor choices are ethical choices. If you run production workloads on AWS, Google Cloud, or Azure, you're on infrastructure that has military contracts with active combatants. That's not a reason to panic — there may not be viable alternatives — but it's a reason to stop pretending the choice is purely technical.
Anthropic's standoff with the Pentagon is, whether you agree with their position or not, the first time a frontier AI company has publicly drawn a line and accepted serious consequences for it. That precedent matters. The alternative — a world where every model maker quietly adjusts their terms of service to accommodate military demand — is the world we'll get if nobody pushes back.
The algorithms of war are already here. The question isn't whether AI will reshape conflict — it already has. The question is whether the people building these systems will have any say in how they're used, or whether that decision has already been made for us.