OpenAI has announced an agreement with the U.S. Department of Defense (rebranded by the current administration as the “Department of War”) to deploy its artificial intelligence models on the military’s classified network. CEO Sam Altman said the deal reflects shared commitments to safety and responsible use, with OpenAI’s technology integrated under strict controls and oversight rather than embedded in autonomous weapons or unsupervised systems.
A key feature of the agreement is that it embeds ethical safeguards OpenAI has championed, including bans on domestic mass surveillance and requirements that humans remain responsible for any use of force. According to the company, the Department of Defense agreed to respect these “red lines” and to incorporate them into policy and deployment practice. OpenAI will also retain control over how its models are used and will implement technical safety measures to ensure the AI behaves as intended.
This development comes amid a sharp escalation in tensions between the Pentagon and Anthropic, a rival AI lab that previously held a significant defense contract. After months of negotiations over ethical restrictions — particularly regarding surveillance and autonomous weapon use — the Pentagon moved to end its relationship with Anthropic and ordered federal agencies to drop the company’s technology, calling it a national security supply-chain risk.
The rapid shift toward OpenAI’s involvement shows how AI safety and governance debates are intersecting with national security priorities. OpenAI’s willingness to bind its deployment to explicit ethical constraints reportedly made it a more acceptable partner for defense officials after Anthropic’s refusal to relax certain limits. At the same time, critics warn that debates over militarized AI and industry cooperation with government agencies are likely to intensify as more models are deployed for sensitive purposes.