The White House has reportedly opposed Anthropic’s proposal to expand access to its advanced AI model, Mythos, to roughly 70 additional organizations. The move would have increased the total number of entities using the cybersecurity-focused model to around 120. According to reports, administration officials raised concerns both about national security risks and about whether the company could supply enough computing power without degrading government access to the system.
Mythos is considered one of the most powerful cybersecurity-oriented AI systems developed so far. The model is capable of autonomously identifying and exploiting software vulnerabilities, which has alarmed both government agencies and private companies. Anthropic initially limited access to a small group of critical infrastructure organizations and government users, including agencies connected to national security operations. The White House fears that a broader rollout could increase the risk of misuse or unauthorized access to a highly sensitive tool.
The situation is especially complicated because the administration appears divided on Anthropic itself. While some officials are resisting Mythos expansion, the White House is simultaneously exploring executive guidance that could allow federal agencies to continue working with Anthropic despite earlier Pentagon objections. Tensions between Anthropic and the U.S. government escalated after disputes over military uses of AI, including disagreements about autonomous weapons and domestic surveillance restrictions.
The debate highlights a broader issue facing the AI industry: how to manage frontier AI systems that have dual-use capabilities. Models like Mythos can strengthen cybersecurity defenses, but they can also potentially automate sophisticated cyberattacks. As governments and AI companies race to deploy increasingly powerful systems, questions around access control, infrastructure limits, and national security are becoming central to the future of advanced AI governance.