The U.S. Department of Defense is in a deepening conflict with Anthropic, a leading AI company known for its safety-focused Claude models, over how artificial intelligence should be used in military operations. Pentagon officials are pressing Anthropic to allow its AI systems to be used for “all lawful purposes,” including incorporation into weapons systems and battlefield applications, as part of broader efforts to integrate advanced AI into national defense strategy. Anthropic, however, has resisted lifting ethical limits on certain high-risk applications — particularly fully autonomous weapons and mass domestic surveillance — arguing that such restrictions are essential to safe and responsible AI development.
At the heart of the dispute is Anthropic’s refusal to fully comply with Pentagon demands, which has prompted the Defense Department to consider designating the company as a “supply chain risk.” Such a designation is typically used for foreign adversaries and would force any contractor doing business with the U.S. military to sever ties with Anthropic, dealing a potentially devastating blow to the company’s revenue and partnerships. The standoff highlights how ethical principles and national security priorities are colliding in the rapidly evolving AI landscape.
The disagreement follows the $200 million Pentagon contract awarded to Anthropic in 2025, when the company was one of several major AI firms selected to support defense AI initiatives; that contract is now reportedly under review because of the stalled negotiations. Other major AI developers, such as OpenAI and Google, have shown more willingness to grant broad access for military use, leaving Anthropic as the most significant holdout. Pressure is mounting from Defense Secretary Pete Hegseth, who has insisted that AI tools be able to support all lawful military needs without usage limits.
Industry experts say the outcome of this dispute could shape how AI safety principles are balanced against military requirements going forward. Some warn that forcing AI companies to drop ethical guardrails could reduce private-sector leverage over how powerful AI tools are deployed, while others argue that the Pentagon needs unfettered access for national defense effectiveness. The confrontation underscores the broader tension between innovation with safeguards and government demands for maximum operational flexibility in AI technologies.