The Pentagon’s Battle With Anthropic Is Really a War Over Who Controls AI

The U.S. Department of Defense, led by Defense Secretary Pete Hegseth, has entered a high-stakes standoff with Anthropic, the AI lab behind the Claude artificial intelligence system. What began as a cooperative relationship — with Anthropic under a roughly $200 million contract supplying AI for defense purposes — has quickly escalated into a confrontation over how far the military should be able to use and control advanced AI. The Pentagon is demanding that Anthropic give it unrestricted access to Claude for “all lawful purposes,” a phrase that would strip away many of the ethical guardrails the company has put in place.

Anthropic has refused to acquiesce, saying it “cannot in good conscience” allow its AI to be used in ways that could enable fully autonomous weapons or mass domestic surveillance, two applications the company considers dangerous and unethical. CEO Dario Amodei argues that such uses exceed the capabilities and reliable safety limits of current AI systems, and that permitting them would undermine democratic values and public trust in the technology. These “red lines” have put Anthropic in direct conflict with the Pentagon’s push for AI tools that can serve a wide range of defense needs without restrictions.

The dispute has grown more intense because of the Pentagon’s threats of drastic consequences if Anthropic does not comply. Hegseth has warned that the department could cancel its contract, designate Anthropic a “supply chain risk” — a label usually reserved for adversarial foreign firms — or even invoke the Defense Production Act to compel the company to open its systems to full military use. Such actions could severely jeopardize Anthropic’s future military business and broader industry relationships, potentially chilling private AI companies’ ability to set their own safety and ethical standards.

Critics of the Pentagon’s approach have raised concerns about the broader implications of this showdown for AI governance and civil liberties. Some lawmakers have condemned the heavy-handed tactics as unprofessional or bullying, while policy experts warn that allowing the military to dictate the terms of private companies’ AI safety safeguards could set a precedent that erodes public oversight and ethical constraints. As negotiations continue under looming deadlines, the outcome of this conflict could shape how AI is controlled and regulated in defense and civilian contexts for years to come.
