AI Geopolitics · 7 min read

OpenAI Robotics Chief Resigns Over Pentagon Deal

Caitlin Kalinowski's departure marks the most visible protest yet against the AI company's military contract, escalating internal tensions over surveillance and autonomous weapons.

Caitlin Kalinowski, head of robotics at OpenAI, resigned Saturday, saying the company failed to adequately deliberate over its Pentagon agreement before allowing its AI systems onto classified military networks.

Kalinowski joined OpenAI in November 2024 after nearly two and a half years at Meta, where she led the Orion augmented-reality glasses project. At OpenAI she oversaw a San Francisco robotics lab employing about 100 data collectors who train robotic arms for household tasks. Her departure came just days after OpenAI signed its defense contract.

In her statement, Kalinowski wrote: “Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” According to TechCrunch, she clarified that “the announcement was rushed without the guardrails defined.”

Timeline

28 Feb 2026: Anthropic Designated Supply Chain Risk. Defense Secretary Pete Hegseth blacklisted Anthropic after the company refused to remove safeguards against mass surveillance and autonomous weapons use.

28 Feb 2026: OpenAI Signs Pentagon Deal. Hours after Anthropic’s designation, OpenAI announced an agreement to deploy AI models on classified Defense Department networks.

3 Mar 2026: Contract Amendments. OpenAI revised contract language to explicitly prohibit domestic surveillance after internal and external backlash.

7 Mar 2026: Kalinowski Resigns. OpenAI’s robotics lead publicly announced her departure, citing insufficient deliberation on surveillance and weapons guardrails.

The Contract Terms Anthropic Rejected

Negotiations between the Pentagon and Anthropic collapsed after the company pushed for strict limits on domestic surveillance and autonomous weapons, prompting President Trump to direct federal agencies to stop using its Claude AI. Anthropic wanted assurance that its models would not be used for fully autonomous weapons or mass surveillance of Americans, while the Pentagon wanted Anthropic to let the military use the models across all lawful use cases.

OpenAI CEO Sam Altman signed a deal with the Pentagon within hours. According to CNBC, Altman later admitted the deal was “definitely rushed,” that he “shouldn’t have rushed” it, and that it “looked opportunistic and sloppy,” conceding “the optics don’t look good.”

Market Response
Claude US downloads (Feb, month over month): +240%
ChatGPT uninstalls (post-deal): +300%
App Store ranking: Claude overtook ChatGPT

The contract, with a $200 million ceiling, will help the Defense Department prototype how AI can transform administrative operations, according to OpenAI. But according to MIT Technology Review, Altman stated “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with.”

Internal Dissent and Employee Response

Kalinowski’s resignation follows an open letter signed by nearly 100 OpenAI employees expressing support for the same red lines Anthropic had fought for, according to OfficeChai. In public forums and private conversations, OpenAI employees are venting about how leadership handled the Pentagon negotiations, with many employees saying they “really respect” Anthropic for standing up to the Pentagon.

Context

OpenAI revised its usage policies in January 2024 to remove an explicit ban on military applications. The company previously stated its technology should not be used for “weapons development” or “military and warfare.” The policy now permits military use that doesn’t violate its prohibition on developing weapons or harming people.

Employees felt a contract of such importance and magnitude was rushed through, according to a current employee who spoke to CNN. After the deal announcement, protesters surrounded OpenAI’s San Francisco headquarters with chalk messages encouraging employees to remain skeptical of the company’s terms.

Contract language now states “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” and the Department affirmed OpenAI’s services will not be used by Defense Department intelligence agencies like the NSA, according to OpenAI’s official statement. However, the Electronic Frontier Foundation criticized the language, noting that limiting the prohibition to intentional use is a red flag given how often intelligence and law enforcement agencies rely on incidental or commercially purchased data to sidestep stronger privacy protections.

The Autonomous Weapons Question

Pentagon officials revealed the specific use cases that broke negotiations with Anthropic. Under Secretary of Defense Emil Michael said he came to view Anthropic’s ethical restrictions as an irrational obstacle as the military pursues greater autonomy for swarms of armed drones, underwater vehicles, and other machines, telling a podcast: “I need a reliable, steady partner that gives me something, that’ll work with me on autonomous, because someday it’ll be real.”

“This is part of the debate I had with Anthropic, which is we need AI for things like Golden Dome. I can’t predict for the next 20 years what are all the things we might use AI for.”

– Emil Michael, Pentagon Chief Technology Officer

The Pentagon began insisting AI companies allow for “all lawful use” of their technology, and Anthropic’s competitors, including Google, OpenAI, and Elon Musk’s xAI, agreed to those terms, according to NewsNation. Spending on autonomous weapons is estimated to have hit almost $18 billion last year and is expected to keep growing rapidly.

OpenAI claims it will enforce its red line against fully autonomous weapons through “cloud-only deployment,” withholding its models from “edge devices,” on the reasoning that a weapon making independent real-time battlefield decisions must host the AI model onboard. But The Transformer reports that Anthropic rejects this premise, noting that modern battlefield drones can be orchestrated through mesh networks that include cloud data centers.

What to Watch

Analysts at Piper Sandler noted Anthropic is “heavily embedded in the Military and the Intelligence community” and moving off the technology could “pose some short-term disruptions,” warning that “onboarding and negotiating replacement technology will take time and resources.” Defense tech companies are telling employees to stop using Claude and switch to other AI models following the Pentagon’s ban.

Kalinowski’s replacement at OpenAI has not been announced. Anthropic CEO Dario Amodei is back at the negotiating table with the Pentagon, in talks with Under Secretary Emil Michael in a last-ditch effort to reach an agreement, according to CNBC. Members of Congress are now calling for an investigation into the events that led to Anthropic’s supply-chain-risk designation, with demands that Amodei, Defense Secretary Hegseth, Under Secretary Michael, and OpenAI CEO Altman testify.

The immediate question is whether other senior OpenAI employees will follow Kalinowski out. The longer-term question is whether contract language citing “lawful use” provides any meaningful constraint when surveillance law itself permits warrantless collection of commercially available data on Americans at scale.