The United States Department of Defense made two consequential decisions within hours of each other this week, significantly redrawing the lines of artificial intelligence engagement in national security. It first designated American AI company Anthropic a “Supply Chain Risk to National Security,” then signed a classified AI deployment agreement with rival OpenAI, accepting from OpenAI the very safety conditions it had declined to accommodate from Anthropic.
The Collapse of Anthropic’s Pentagon Partnership
Anthropic had occupied a pioneering position in government AI deployment as the first laboratory to integrate its models across the Department of Defense’s classified network. Negotiations over the ongoing terms of its contract, however, ultimately broke down. Anthropic sought formal assurances that its models would not be used for fully autonomous weapons systems or the mass surveillance of American citizens. The Pentagon, seeking broader operational flexibility across all lawful use cases, declined to commit to those restrictions.
Defense Secretary Pete Hegseth subsequently designated Anthropic a supply chain risk to national security – a label typically applied to foreign adversaries, not domestic technology firms. The designation requires all DoD vendors and contractors to certify they do not use Anthropic’s models. President Donald Trump further directed every federal agency to immediately cease all use of Anthropic’s technology. The company described itself as “deeply saddened” by the decision and announced its intention to challenge the designation in court.
OpenAI Steps In
Hours later, OpenAI CEO Sam Altman announced on X that his company had reached an agreement with the Department of Defense to deploy its models within the Pentagon’s classified network. Notably, Altman had told OpenAI employees in an internal memo just one day earlier that his company shared the same red lines as Anthropic. The DoD nonetheless accepted OpenAI’s terms, which included a prohibition on domestic mass surveillance and a requirement for human accountability in decisions involving the use of force, the same principles it had declined to formalise with Anthropic. Altman also publicly called on the Pentagon to extend identical terms to all AI companies.
An Unanswered Question
Why were identical safety conditions acceptable from OpenAI but not from Anthropic? Government officials had for months privately criticised Anthropic for being excessively focused on AI safety, suggesting the dispute carried a political dimension beyond the contractual disagreement. As AI becomes increasingly embedded in national security infrastructure, the terms governing that relationship will carry implications well beyond any individual contract.