Deadly Iran school strike casts shadow over Pentagon’s AI targeting push

By primereports · March 24, 2026


KYIV, Ukraine — On the first day of the U.S.-Iran war, a Tomahawk cruise missile struck Shajareh Tayyebeh elementary school in Minab, southern Iran. At least 168 people were killed — more than 100 of them under the age of 12, according to UN and Iranian officials.

The school building sat fewer than 100 yards from a long-time Islamic Revolutionary Guard Corps naval installation and was previously located within the IRGC compound perimeter until a wall appeared between 2013 and 2016, according to an analysis of satellite imagery by Amnesty International.

By the time the U.S. and Israel launched their first strikes on Feb. 28, the school had been operating for several years. It was active on social media and had its own website, a Reuters investigation found.

So what went wrong?

“Was artificial intelligence, including the use of the Maven Smart System, used to identify the Shajareh Tayyebeh school as a target?” more than 120 House Democrats asked in a March 12 letter to the Pentagon, just days after 46 Senate Democrats sent a similar request demanding clarity on the deadly hit.

The Maven Smart System, a targeting and intelligence platform developed by data analytics company Palantir Technologies under a $1.3 billion Pentagon contract, was built to solve a problem that has grown exponentially in recent years: information overload. Artificial intelligence is its secret weapon.

Maven fuses satellite imagery, drone feeds, radar data and signals intelligence into a single interface, then classifies targets, recommends weapons systems and generates strike packages in near real time, compressing kill-chain reasoning and decision making into the fastest timelines ever seen on the battlefield.
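A pipeline like the one described above can be sketched in a few lines. Everything here is an assumption for illustration — the stage names, the `Detection` fields and the confidence threshold are hypothetical, and Maven’s real architecture is not public:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str          # e.g. "satellite", "drone", "radar", "sigint"
    coordinates: tuple   # (lat, lon)
    label: str           # classifier output, e.g. "naval installation"
    confidence: float    # classifier's self-reported confidence, 0..1

def fuse(*feeds):
    """Merge detections from every sensor feed into one candidate list."""
    return [d for feed in feeds for d in feed]

def rank(candidates, threshold=0.8):
    """Keep candidates above a confidence threshold, highest first."""
    kept = [d for d in candidates if d.confidence >= threshold]
    return sorted(kept, key=lambda d: d.confidence, reverse=True)

def strike_package(ranked, human_verified):
    """Emit coordinates only for targets a human operator has signed off on."""
    return [d.coordinates for d in ranked if human_verified(d)]
```

The consequential design choice is the `human_verified` callback in the final step: everything upstream, including coordinates built on stale intelligence, flows through `fuse` and `rank` automatically, which is why the final check matters.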

And it uses Anthropic’s Claude AI model, embedded in its system, to semi-autonomously rank targets by strategic importance, drafting automated legal justifications for each strike along the way.

The software generated hundreds of strike coordinates in the first 24 hours of the Iran campaign, helping the U.S. hit more than 1,000 targets on the war’s opening day, according to The Washington Post.

After sources briefed on preliminary findings told CNN that U.S. Central Command had generated targeting coordinates from Defense Intelligence Agency data that had not been updated to reflect the school’s presence, one question from the lawmakers became central to the inquiries: “If so, did a human verify the accuracy of this target?”

Lawmakers are still waiting for an official explanation.

Ukrainian drone operators who build and deploy semi-autonomous targeting systems on the front line told Military Times they recognized the likely culprit immediately.

Ihor Matviyuk, director of the Ukrainian drone company Aero Center, said he can imagine exactly how the failure happened.

He has no inside knowledge of the Minab strike specifically, but he said earlier this month that it bears the hallmarks of a targeting failure, not an AI malfunction.

“It was almost definitely a strike on the [given] coordinates,” Matviyuk told Military Times. “The main problem was not the AI — it was how close the military object was to the school.”

Last week, former military officials speaking to Semafor confirmed Matviyuk’s early assessment: “Humans — not AI — are to blame” for the school strike, they said, pointing to stale human-curated data fed to the Pentagon’s Maven targeting platform.

Matviyuk recognized the pattern because he’s had to decide how much AI to use in his own semi-autonomous weapon systems again and again as drone warfare and software capabilities have rapidly evolved on Ukraine’s battlefield.

“Automatic targeting allows us to capture less than half of the targets, not more,” Matviyuk said. “Because they are all still camouflaged.”

Ukrainian soldiers train with drones at an undisclosed location in the Donetsk Oblast, Ukraine, September 2025. (Diego Herrera Carcedo/Anadolu via Getty Images)

The Defense Department’s own data bears that out. Maven can correctly identify objects at roughly 60% accuracy overall — compared with 84% for human analysts.

But that rate drops below 30% in adverse conditions, such as bad weather or poor visibility, according to Pentagon data published in a 2024 Bloomberg report.

The risk of “collateral damage,” as the strike on the Minab school might be categorized in military terminology, is too high. That is why Aero Center and every other Ukrainian drone company that spoke with Military Times leaves the final strike decision to a human operator.

“The direct impact is always carried out by the operator’s command,” Matviyuk said, “to prevent civilians from getting under the blow.”

In 2021, an experimental U.S. Air Force targeting AI scored roughly 25% accuracy in real conditions, despite rating its own confidence at 90%, then-Maj. Gen. Daniel Simpson, the Air Force’s assistant deputy chief of staff for intelligence, surveillance, and reconnaissance, told Defense One.

“It was confidently wrong,” Simpson said, summing up the program’s problems. “And that’s not the algorithm’s fault. It’s because we fed it the wrong training data.”
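Simpson’s “confidently wrong” problem can be made concrete with a toy calculation. All labels and numbers below are hypothetical; the point is only that a model’s self-reported confidence says nothing about its real accuracy when its training data misrepresents the environment it is deployed in:

```python
# Each entry: (predicted_label, self_reported_confidence, ground_truth).
# Hypothetical data: the model keeps calling camouflaged decoys "launcher".
predictions = [
    ("launcher", 0.93, "decoy"),
    ("launcher", 0.91, "decoy"),
    ("launcher", 0.90, "decoy"),
    ("launcher", 0.89, "launcher"),
]

# What the model believes about itself, on average.
mean_confidence = sum(conf for _, conf, _ in predictions) / len(predictions)

# What is actually true: fraction of predictions matching ground truth.
accuracy = sum(pred == truth for pred, _, truth in predictions) / len(predictions)

print(round(mean_confidence, 2))  # 0.91 — the model reports high confidence
print(accuracy)                   # 0.25 — one hit in four
```

The gap between the two numbers mirrors the Air Force experiment: roughly 90% reported confidence against roughly 25% real-world accuracy, a property of the training data rather than the algorithm.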

The situation is not expected to improve. Last month, Defense Secretary Pete Hegseth slashed the Civilian Protection Center of Excellence workforce by approximately 90% and cut CENTCOM’s civilian casualty assessment team from 10 people to one, Politico reported.

Then, after leaving a skeleton staff to oversee the guardrails of the biggest expansion of AI in the military, Deputy Secretary of Defense Steve Feinberg signed a memo earlier this month formalizing AI’s role in military decision making — designating Maven an official program of record and pushing adoption across all U.S. military branches by September, Reuters reported on Friday.


Ukrainian weapon makers like Matviyuk are not shying away from giving AI more autonomy, but they’re using it strategically.

Autonomous targeting is effective for “massive offensive operations, where targets are not camouflaged,” he said, a description that may fit Iran’s fixed military installations, which are far less concealed than most positions on the Ukrainian front.

“We support the idea of using the human element less and less in the drone operator job,” Matviyuk said. “Autonomy, autonomous elements of drones — that’s the stuff we are working on.”

The problem, in his view, was not that the Pentagon used AI. It was that the data behind the target had not been updated since a girls’ school replaced a military headquarters on the same coordinates — and the people whose job it was to verify that data had already been cut from the chain.
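The specific failure Matviyuk describes — a strike record that outlived the reality on the ground — is the kind of thing a simple freshness guard can catch. This is a hypothetical sketch, not anything the Pentagon is known to run; the field names and the 180-day cutoff are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

def needs_human_recheck(target: dict,
                        max_age: timedelta = timedelta(days=180)) -> bool:
    """True if the intelligence behind this target is too old to act on
    without a human analyst re-verifying it first."""
    age = datetime.now(timezone.utc) - target["last_verified"]
    return age > max_age

# A record last verified around the time the wall went up between the
# school and the IRGC compound (illustrative values only):
stale_target = {
    "coordinates": (27.10, 57.08),
    "last_verified": datetime(2016, 5, 1, tzinfo=timezone.utc),
}
assert needs_human_recheck(stale_target)  # kick it back to an analyst
```

A check like this is only as good as the people available to act on the flag — which is exactly the oversight capacity that had been cut.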

AI systems are only as reliable as the people who build, feed and oversee them, Matviyuk emphasized.

When the human link fails, whether through bad data, gutted oversight or compressed timelines, the machine will continue to execute the error with precision.

At a Center for Strategic and International Studies panel last week, former CENTCOM director of intelligence Lt. Gen. Karen Gibson was unequivocal about where accountability for lethal strikes lies, regardless of a weapon’s autonomy.

“I will always come back to the fundamental principle of human responsibility and accountability,” she said. “A commander somewhere will ultimately be held responsible — not a machine or a software engineer.”
