OpenAI Robotics Chief Caitlin Kalinowski Resigns Over Pentagon Deal, Citing Surveillance and Autonomous Weapons Concerns

corporate-accountability · ai-safety · military-ai · tech-regulation · autonomous-weapons · domestic-surveillance · employee-dissent
2026-03-07 · 2 min read

type: timeline_event

Caitlin Kalinowski, who had led hardware and robotics operations at OpenAI since November 2024, resigned on March 7, 2026, citing the company's Pentagon contract as the direct cause of her departure. Her resignation was the highest-profile employee departure resulting from the deal and crystallized a deep internal fracture at OpenAI over the ethics and governance of military AI deployment.

In her resignation statement, Kalinowski wrote that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." She described the announcement as having been "rushed without the guardrails defined" and characterized the governance failure as "a concern first and foremost," signaling that her objection was not only to the substance of the deal but to the process by which it was made — without meaningful internal deliberation, ethics review, or employee input.

Her departure came amid a broader wave of internal discontent. OpenAI employees had been expressing frustration, both publicly and privately, with CEO Sam Altman's handling of the Pentagon negotiations since the deal's announcement. Employees who had worked alongside colleagues at Anthropic said they "really respected" Anthropic's willingness to hold its safety lines even at the cost of federal contracts. An open letter signed by over 900 employees from OpenAI and Google demanded that their employers reject all Pentagon surveillance contracts. At an all-hands meeting, Altman reiterated that the rollout had been a "mistake" and said he was urging the government to drop Anthropic's supply-chain risk designation.

Kalinowski's resignation drew particular attention because she had previously worked at Meta on virtual reality hardware before joining OpenAI's push into physical AI systems and robotics — a domain where the intersection of AI, hardware, and weapons systems is most acute. Her departure represented not only a loss of technical leadership but a public signal that OpenAI's stated safety commitments were insufficient to retain senior employees who had joined the company on the assumption those commitments were meaningful.

The episode deepened questions about whether voluntary corporate safety policies could function as any real constraint on AI deployment when companies faced competitive pressure and government leverage. Within the span of ten days, the AI industry had witnessed one company (Anthropic) blacklisted for asserting safety limits, another company (OpenAI) rushing to fill the gap and then revising its deal under backlash, and now that company's own safety-oriented leadership departing in protest. The pattern suggested that market and political dynamics were systematically eroding the space for AI companies to maintain meaningful safety standards in the national security context.