The United States government is preparing strict new rules for artificial intelligence companies that want to work with federal agencies. The proposed guidelines would require AI developers to allow the government to use their models for “any lawful purpose.” The move comes as Washington tries to tighten control over how AI technology is used in government operations and national-security projects.
The policy emerged after a dispute between the Pentagon and AI company Anthropic, the developer of the Claude AI system. Citing ethical concerns, Anthropic refused to give the U.S. military unrestricted rights to use its AI for applications such as mass surveillance and fully autonomous weapons. The disagreement escalated to the point where the Pentagon designated Anthropic a “supply-chain risk” and blocked the company from certain government contracts.
Under the proposed rules, any AI firm seeking U.S. government contracts would have to grant the government an irrevocable license to use its technology, provided the use is legal. The draft guidelines would also require AI systems to avoid partisan or ideological bias and would compel companies to disclose whether they modify their models to comply with foreign regulations such as the EU’s digital rules.
The conflict highlights growing tension between AI companies focused on safety and governments prioritizing national security. While firms like Anthropic want limits on how their technology is used, U.S. officials argue that contractors cannot dictate how government agencies employ tools purchased with public funds. The final rules are still under review, but they could significantly reshape how AI companies collaborate with the U.S. government.