Pentagon Issues Friday Deadline to Anthropic Over Military AI Restrictions: Report

The United States government and the artificial intelligence industry are facing one of their most consequential confrontations to date. The Department of Defense has issued a firm ultimatum to AI company Anthropic, demanding that it agree to the military’s terms for AI use by a Friday deadline or face forced compliance. The deadline is set for 5:01 p.m. on Friday, February 28, after which Anthropic risks being blacklisted from the military supply chain. The Pentagon has threatened a range of escalating consequences, including contract termination, supply chain risk designation, and potential invocation of the Defense Production Act, a Cold War-era statute granting the federal government sweeping authority to direct private industry toward national security priorities.

The Two Redlines Anthropic Will Not Cross

During a Tuesday meeting with Defense Secretary Pete Hegseth, Anthropic CEO Dario Amodei walked through the company’s firm redlines: autonomous weapons, in which AI rather than humans makes final targeting decisions, and mass domestic surveillance of American citizens. Anthropic maintains that its current models are not sufficiently reliable for deployment in environments where errors carry irreversible consequences. The company has long positioned itself as the more responsible and safety-conscious of the leading AI firms, a philosophy rooted in its founding principles since its establishment in 2021. In a statement following the meeting, Anthropic said it would continue to support the government’s national security mission in line with what its models can reliably and responsibly perform.

A Market Divided on Compliance

The Pentagon announced last summer that it was awarding defense contracts to four AI companies: Anthropic, Google, OpenAI, and Elon Musk’s xAI, with each contract valued at up to $200 million. Anthropic was the first AI company approved to operate on classified military networks, where it works alongside partners such as Palantir. This week, xAI reached a separate agreement allowing the Pentagon to deploy its Grok AI on classified systems, introducing direct competition to Anthropic’s once-exclusive position. A senior Defense official acknowledged the tension, noting that the Pentagon values Anthropic precisely because of the capability of its models, yet the company’s restrictions remain unacceptable to military leadership.

The Precedent That Hangs in the Balance

The implications of this standoff extend well beyond a single contractual dispute. The supply chain risk designation is typically reserved for foreign adversaries such as Huawei; applying it to a domestic American firm would be an extraordinary and unprecedented step, effectively forcing military contractors across the industry to sever ties with Anthropic entirely. Anthropic could theoretically challenge a Defense Production Act invocation in court, arguing that its product is not commercially available hardware but custom-built software already tailored for sensitive government use. As artificial intelligence becomes ever more deeply embedded in national security infrastructure, the resolution of this conflict is likely to define the boundaries between government authority and private-sector autonomy in AI governance for years to come.
