A recent evaluation by the Future of Life Institute (FLI) finds that major AI developers — including Anthropic, OpenAI, Meta, and xAI — are falling significantly short of emerging global safety benchmarks. The independent panel concluded that while these companies race to build ever-more powerful AI systems, none has a robust, credible strategy in place to ensure safe control over advanced AI.
The report assessed firms across six key dimensions: risk assessment, current harms, safety frameworks, existential-risk planning, governance, and transparency. While some companies earned credit for public safety-research commitments or partial transparency, the overall findings point to widespread under-preparation, especially for long-term risks associated with future “superintelligent” AI.
Among the most serious concerns: none of the companies evaluated offers a clearly testable plan to guarantee human control over highly capable AI systems. In some cases, companies' internal risk-tolerance thresholds remain undefined, and mitigation strategies for potential catastrophic misuse or “loss of control” appear superficial or symbolic rather than concrete.
The findings have reignited warnings from experts. Some liken building powerful AI without robust safety protocols to launching a nuclear reactor with no safety checks, arguing that if development continues unchecked, the societal stakes could be enormous. Critics are urging regulators and governments worldwide to impose standards, require third-party audits, and demand transparent, enforceable safety plans before more advanced AI systems are deployed.