type: timeline_event
On March 18, 2026, reporting from Breaking Defense and Reuters revealed that military personnel and defense contractors who relied on Anthropic's Claude in daily operations were pushing back sharply against Pentagon CTO Emil Michael's assertion that the transition to replacement AI systems could be completed within months. Military users with direct experience operating Claude in classified environments said that recertifying replacement models to meet the security, workflow-integration, and operational-reliability standards of classified systems could realistically take 12 to 18 months.
The resistance ran deeper than technical timelines. Claude had become deeply embedded in classified workflows across multiple defense and intelligence agencies over the preceding two years, with custom fine-tuning and security certifications representing months of specialized work. Palantir's Maven Smart System — the Pentagon's flagship AI targeting platform — had built custom workflows on Claude Code that would need to be entirely rebuilt and recertified on a different underlying model. Defense contractors who had integrated Claude into their products faced expensive, time-consuming re-engineering with no guarantee that replacement models would perform equivalently on the specialized tasks for which Claude had been optimized.
Perhaps most notably, Breaking Defense reported that Pentagon IT staff were "slow-rolling" the transition, implementing the directive at a pace suggesting they hoped the Pentagon and Anthropic would reconcile before the transition became irreversible. The passive resistance reflected a pragmatic calculation among career defense technologists, who viewed the blacklist as a political decision that would degrade operational capability. Some IT personnel privately expressed concern that rushing less-tested models into classified environments could introduce security vulnerabilities and reliability failures, creating the very supply-chain risk the designation was ostensibly meant to address. The situation illustrated the gap between political leadership's desire for swift action and the operational reality of replacing deeply integrated AI infrastructure.