A report indicates that the U.S. National Security Agency (NSA) is using Anthropic’s advanced AI model, Mythos Preview, even amid ongoing tensions between the company and the U.S. Department of Defense. According to the report, the NSA is one of roughly 40 organizations granted access to the system, and its use may already be expanding within the agency.
What makes this development notable is the contradiction in government policy. Earlier, the Pentagon had labeled Anthropic a “supply chain risk” and sought to restrict its use across federal agencies. Despite this, the NSA—part of the same defense ecosystem—appears to be actively using the model, suggesting that security needs may be outweighing policy disagreements.
The reason for this interest lies in Mythos’s capabilities. The model is designed specifically for advanced cybersecurity tasks, such as identifying vulnerabilities in software systems. Organizations with access are reportedly using it to scan for weaknesses and strengthen their defenses, though experts warn the same capabilities could be turned to offensive use.
Overall, the situation highlights a broader tension in the AI era: governments are racing to adopt powerful AI tools for security and strategic advantage, even while grappling with the risks those same tools introduce. The NSA’s reported use of Mythos underscores how critical—and controversial—advanced AI systems have become in national security.