DOD Files 40-Page Rebuttal Claiming Anthropic Might "Disable Its Technology" During Warfighting Operations Over Corporate Red Lines

ai-safety · military-ai · autonomous-weapons · rule-of-law · anthropic-blacklist
2026-03-18 · 1 min read

type: timeline_event

On March 18, 2026, the Department of Defense filed a 40-page response in the Northern District of California — its first substantive rebuttal in the Anthropic litigation — arguing that the supply chain risk designation was justified because Anthropic might "attempt to disable its technology" during "warfighting operations" if the company "feels its corporate 'red lines' are being crossed." The filing framed Anthropic's AI safety commitments not as responsible engineering practices but as a national security threat, suggesting that a company with principled limitations on its technology's use was inherently unreliable as a defense supplier.

The DOD's argument staked out a remarkable legal position: that a private company's stated ethical commitments constituted grounds for the most severe procurement sanction available. By the filing's logic, any AI company maintaining safety red lines — restrictions on autonomous targeting, limits on surveillance applications, requirements for human oversight — could be designated a supply chain risk by virtue of those commitments alone. First Amendment lawyer Chris Mattei, who had been following the case, told reporters that "no investigation" supported the DOD's claims, noting that Anthropic had never threatened to disable any system and that the Pentagon's argument appeared to be entirely speculative.

Legal analysts observed that the DOD's filing revealed the administration's broader theory of the case: that military AI systems must come from companies willing to cede all control to the government, with no contractual limitations on use cases, no safety restrictions, and no ability to withdraw service regardless of how the technology was deployed. The implication was that the Pentagon sought AI vendors who would function as pure utilities, supplying capability without accountability. Anthropic's legal team was expected to address the filing in detail before the March 24 hearing, and observers were watching the case closely as a potential precedent for the government's power to punish companies for maintaining ethical standards.