Talks between Anthropic and the Trump administration have collapsed, with the AI giant wanting guarantees that its AI would not be used for mass surveillance or in fully autonomous weapons that can kill without human control.
OpenAI CEO Sam Altman wrote on X: “We reached an agreement with the Department of War (DoW) to deploy our models in their classified network.
“In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”
He added: “AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including the use of autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Altman added that the company will also build technical safeguards to ensure its models behave as “they should”, with the DoW likewise committed to ensuring safety.
He stated: “We are asking the DoW to offer these same terms to all AI companies — terms which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.
“We remain committed to serving all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.”
The Pentagon had pressured Anthropic to weaken its ethical rules or face serious consequences, according to reports.
Trump wrote on Truth Social: “The Left-wing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the [Pentagon], and force them to obey their Terms of Service instead of our Constitution.”
Altman tried to reassure OpenAI staff in a memo reported by Axios.
“Regardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance,” Altman wrote.
“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”
Altman added: “We are going to see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.”
“No amount of intimidation or punishment from the [Pentagon] will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic said.
“We have tried in good faith to reach an agreement with the [Pentagon], making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above,” the company added.
“To the best of our knowledge, these exceptions have not affected a single government mission to date.”