The United States government is significantly expanding its roster of artificial intelligence suppliers for defence, signaling a strategic shift toward diversification and reduced dependence on any single vendor. The Pentagon has reportedly added Microsoft, Amazon, Nvidia, and Reflection AI to a roster that already includes OpenAI, Google, and xAI. These firms are now authorized to deploy AI tools across classified military operations, including highly sensitive environments.
A key reason for this expansion is to build flexibility and avoid “vendor lock-in.” By working with multiple AI providers, the US military aims to ensure continuity even if one company changes its policies or withdraws support. The new AI systems are expected to enhance capabilities such as data analysis, situational awareness, and decision-making in complex operations, contributing to what officials describe as an “AI-first fighting force.”
The expansion comes amid a controversy centered on Anthropic, whose relationship with the US government has deteriorated. The conflict stems from disagreements over how its AI systems, particularly Claude, can be used. Anthropic opposed allowing its technology to be used for domestic surveillance or fully autonomous weapons, while the Pentagon pushed for broader, unrestricted use. This led to the cancellation of a major contract and to the company being labeled a “supply chain risk,” an unprecedented move against a US-based AI firm.
Despite the fallout, Anthropic has not been completely sidelined. Its AI tools are still reportedly in use in certain intelligence and cybersecurity contexts, and there are signs the US administration may be trying to repair the relationship. Overall, the situation reflects a broader tension in the AI industry, balancing national security priorities against ethical constraints, as governments race to integrate advanced AI into defence systems.