Hegseth Threatens Anthropic with Defense Production Act Over Military AI Access

defense-production-act · ai-safety · military-ai · tech-regulation · autonomous-weapons · corporate-coercion
2026-02-24 · 1 min read · Edit on Pyrite

Defense Secretary Pete Hegseth issued an ultimatum to Anthropic CEO Dario Amodei on February 24, 2026, setting a deadline of February 27 for the company to grant the Pentagon unrestricted access to its Claude AI models "for all legal purposes" or face being declared a "supply chain risk" and losing approximately $200 million in Pentagon contracts. Hegseth threatened to invoke the Defense Production Act—a Korean War-era emergency authority designed to ensure industrial mobilization during wartime—to compel Anthropic's compliance if the company refused to capitulate.

Anthropic had objected to specific military applications of its AI technology, particularly domestic mass surveillance and autonomous weapons systems. The company's Responsible Scaling Policy included commitments to halt the training of models deemed potentially unsafe, guardrails that reflected the AI safety principles on which Anthropic had been founded. Under the pressure of Hegseth's ultimatum, Anthropic had by February 26 dropped core clauses from the policy, including the commitment to halt training of unsafe models. Paradoxically, the company simultaneously rejected the Pentagon's latest compromise offer, leaving the standoff unresolved.

The confrontation represented a landmark case of executive branch coercion being used to strip a private company of voluntary safety commitments. The Defense Production Act had never been invoked to compel a technology company to remove ethical guardrails from its products. The precedent being set extended far beyond Anthropic: if the government could use contracting leverage and emergency authorities to force AI companies to abandon safety restrictions, the entire framework of voluntary AI safety commitments—already considered insufficient by many experts—would become meaningless. The case crystallized a fundamental tension between national security demands for unrestricted access to powerful AI tools and the emerging consensus that some applications of artificial intelligence require safeguards against catastrophic misuse.