Anthropic Files Dual Federal Lawsuits Challenging Pentagon's Supply Chain Risk Designation as Unconstitutional Retaliation

first-amendment · ai-safety · corporate-punishment · national-security · tech-regulation · autonomous-weapons · domestic-surveillance · ai-policy · judicial-challenge · government-retaliation
2026-03-09 · 2 min read

type: timeline_event

On March 9, 2026, Anthropic filed two federal lawsuits against the Trump administration challenging the Department of Defense's designation of the company as a "supply chain risk to national security," a label historically applied only to foreign state-linked entities. The split filing was strategic: one suit was lodged in the U.S. District Court for the Northern District of California, the other in the U.S. Court of Appeals for the District of Columbia Circuit, each attacking a different legal dimension of the Pentagon's actions.

The California complaint made the broader constitutional argument, asserting that the supply chain risk designation and the subsequent directive barring all federal agencies from using Anthropic's products constituted unlawful government retaliation against protected First Amendment speech. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the complaint stated. The "protected speech" at issue was Anthropic's publicly stated position, backed by usage restrictions in its contracts, that Claude should not be used for domestic mass surveillance of American citizens or for fully autonomous lethal weapons systems with no human in the targeting loop. The D.C. filing sought narrower administrative-law review of whether the Pentagon had exceeded the statutory scope of its supply chain risk authority by applying the designation to an American company over a commercial contract dispute.

The General Services Administration had separately terminated Anthropic's "OneGov" contract, a vehicle that had made Claude AI services available to all three branches of the federal government. Anthropic's CFO Krishna Rao stated in a related court filing that the cumulative government actions threatened to reduce the company's 2026 revenue by multiple billions of dollars, as federal pressure was already chilling its commercial contracts with private firms that depend on government subcontracting relationships.

In an unusual show of cross-competitor solidarity, dozens of researchers and scientists at OpenAI and Google DeepMind filed amicus briefs in their personal capacities supporting Anthropic's challenge, a signal that the industry broadly recognized the case's stakes as extending beyond Anthropic alone. The implicit message was that any AI company that asserted safety limits on government use of its systems could be subjected to the same punitive designation.

The case represented the first time the federal judiciary would be asked to adjudicate whether the executive branch could compel a private AI company to strip its models of safety constraints as a condition of market access. Legal analysts assessed that Anthropic's arguments were strong on multiple independent grounds: procedural defects in the designation process, statutory overreach under the supply chain risk authority, and the unconstitutional-conditions doctrine. The resolution was expected to set binding precedent governing the relationship between AI safety policy, federal contracting power, and the First Amendment for years to come.