Treasury, State, and HHS Begin Phasing Out Anthropic Products Following Trump Executive Directive

institutional-capture · executive-power · corporate-punishment · tech-regulation · ai-regulation
2026-03-04 · 2 min read

type: timeline_event

Following President Trump's executive directive accompanying the Department of Defense's national security supply-chain designation of Anthropic, multiple civilian federal agencies formally announced plans to phase out Anthropic products in the first days of March 2026. The Departments of Treasury, State, and Health and Human Services all confirmed they would stop using Anthropic's Claude AI models, with the administration's directive providing a six-month window to complete the transition.

The scale of Anthropic's civilian government footprint made the phase-out consequential beyond defense contracting. Federal agencies had integrated Claude into a range of internal workflows, including document summarization, policy research, legal review assistance, and data analysis. The Treasury Department's planned phase-out was notable given that it had also experienced a 24% reduction in workforce under DOGE — meaning the department was being asked to do more with fewer staff while simultaneously losing one of the AI tools it had incorporated to compensate for reduced human capacity.

The Electronic Frontier Foundation characterized the episode as exposing a fundamental structural vulnerability in the use of AI in government: when privacy protections and safety constraints on AI systems depend not on law or regulation but on voluntary decisions by private companies, those protections are only as durable as the political and commercial environment in which the companies operate. The EFF noted that Anthropic's refusal to remove its constraints — and the resulting blacklisting — demonstrated that the government could use procurement power to punish companies that maintained safety and privacy protections, creating a systematic incentive for AI companies to preemptively abandon such protections to remain in federal markets.

The Center for American Progress called on Congress to act, arguing that the episode demonstrated the inadequacy of leaving the governance of AI in sensitive government applications entirely to executive discretion and individual corporate policy decisions. Without legislation establishing baseline requirements for AI safety testing, use restrictions, and procurement conditions, the terms on which AI systems were deployed in federal agencies would continue to be determined by whichever company was willing to offer the fewest constraints at the best price — a race to the bottom that the Anthropic–DoD dispute had made visible.