Even as President Donald Trump ordered federal agencies to stop using AI tools from the San Francisco-based company Anthropic, the U.S. military reportedly deployed Anthropic’s AI system — known as Claude — in a major strike against Iran, according to reports from Engadget and The Wall Street Journal that have since been picked up by other outlets.
According to sources familiar with the situation, U.S. Central Command (CENTCOM) continued to use Claude in the operation — including for intelligence analysis, target identification and battlefield simulations — even after Trump announced that the AI lab had been designated a “supply-chain risk” and ordered out of government use.
The strike, part of a larger joint military action alongside Israeli forces against key Iranian targets, took place on February 28, 2026, and has been described as one of the most significant escalations in U.S.–Iran tensions in years. Although Trump publicly declared a ban on Anthropic’s tools, officials indicated that a formal phase-out period of up to six months applied to defence systems, meaning Claude remained embedded in military workflows during that window.
The incident highlights a broader tension between the military’s reliance on advanced AI and political efforts to restrict its use in sensitive national security applications. In the wake of the dispute with Anthropic, the U.S. government moved quickly to strike a separate agreement with another AI lab, OpenAI, to supply AI tools for classified defence work going forward.