type: timeline_event
On the evening of February 27, 2026—within hours of the Trump administration formally blacklisting Anthropic—OpenAI CEO Sam Altman announced on X that his company had "reached an agreement with the Department of War to deploy our models in their classified network." The timing provoked immediate and severe backlash from AI researchers, ethicists, and tech policy analysts who characterized the announcement as opportunistic, particularly because Altman had that same day publicly expressed support for Anthropic's position.
The agreement, published on OpenAI's website, claimed to include the same two core restrictions Anthropic had fought for: no use of OpenAI technology for mass domestic surveillance, and no use of OpenAI technology to direct fully autonomous weapons systems. Altman argued that OpenAI had achieved what Anthropic could not by anchoring those restrictions in citations to applicable U.S. law rather than explicit contractual prohibitions—noting that "Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with."
Critics immediately identified this framing as the source of the problem rather than the solution. The surveillance-loophole concern centered on the fact that many forms of mass data collection and analysis targeting American citizens are technically lawful under current U.S. statutes, including bulk purchases of commercially available data from data brokers and certain intelligence activities authorized under Executive Order 12333. A contract provision prohibiting only "unlawful" surveillance would therefore permit wide-ranging monitoring that most Americans would consider surveillance in any meaningful sense of the word.
The announcement drew a rare admission from Altman, who within days acknowledged he had made a mistake: "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." Critics within the AI safety community pointed out that OpenAI had effectively demonstrated that the government's coercive strategy worked—that threatening one AI company with blacklisting was sufficient to bring its chief competitor to the table on terms the first company had refused to accept. The deal marked a significant erosion of the norms around voluntary AI safety commitments in the defense context.