Pentagon Uses Banned Anthropic Claude to Select 1,000+ Iran Strike Targets While Simultaneously Blacklisting the Company

iran-war · ai-safety · military-ai · tech-regulation · autonomous-weapons · surveillance-state · institutional-hypocrisy
2026-03-04 · 2 min read

type: timeline_event

Reporting published on March 4-5, 2026, revealed that the U.S. military had been using Anthropic's Claude AI, embedded in Palantir's Maven Smart System on classified networks, to generate and prioritize strike targets in the U.S.-Israeli military campaign against Iran that began on February 28, 2026. Claude generated approximately 1,000 prioritized targets in the first day of operations alone, synthesizing satellite imagery, signals intelligence, and surveillance feeds in real time to produce target lists with precise GPS coordinates, weapons recommendations, and automated legal justifications for individual strikes.

The revelation exposed a profound institutional contradiction. Just hours before the Iran campaign began, the Trump administration had signed an executive order directing federal agencies to phase out Claude, and on March 4 the Department of Defense formally designated Anthropic a national security supply-chain threat. Yet U.S. military commanders told reporters they would continue relying on Anthropic's technology throughout the Iran campaign regardless of the president's order, because no viable replacement had been fielded.

The use of Claude for target selection in active combat operations did not technically violate the specific safety conditions Anthropic had sought to protect in its contract negotiations; those conditions concerned domestic mass surveillance of Americans and fully autonomous lethal weapons operating without human authorization, not AI-assisted target identification with human commanders nominally in the decision loop. The episode nonetheless demonstrated that the distinction between "AI-assisted" and "AI-determined" warfare had effectively collapsed in operational practice: AI systems were generating target lists at a pace and scale that reduced meaningful human review of individual strike decisions to a procedural formality.

The disclosure that Claude selected more than 1,000 targets in a single day illuminated the gap between the policy debates in Washington, framed around contractual red lines and supply-chain designations, and the actual operational deployment of AI in kinetic military action. Military sources confirmed that the Pentagon's dependence on Anthropic's technology had grown so deep during prior integrations that even a six-month phase-out window was optimistic, and that operational commanders were not prepared to degrade their targeting capabilities in the midst of an active air campaign.

The episode marked the first publicly confirmed large-scale use of a commercial large language model in active target selection during combat operations, and it raised fundamental questions about accountability, legal authorization, and the feasibility of any meaningful corporate or contractual guardrail on AI deployed within classified military systems.