A former policy chief from OpenAI has launched a new nonprofit institute focused on improving the safety and accountability of advanced artificial intelligence systems. The organisation aims to push for independent safety audits of so-called “frontier AI models”, the most powerful and capable AI systems, which currently receive little consistent evaluation by neutral third parties. The initiative reflects growing concern among AI experts that self-regulation by developers may not be sufficient to ensure these systems are safe, reliable, and aligned with public interests.
The new institute plans to bring together researchers, policymakers, industry leaders, and civil society to develop frameworks for auditing AI systems before they are widely deployed. Independent audits would assess risks, test for unintended behaviours, and verify claims about performance and safety, providing transparency that goes beyond internal reviews conducted by AI labs themselves. Organisers argue that such external scrutiny could help build trust with the public and regulators without unduly stifling innovation.
Emphasising that powerful AI systems have broad societal impact, the institute’s founders argue that safety practices should be elevated to the same level as foundational research and product development. They believe that proactive evaluation and accountability mechanisms are essential to prevent harms — including biased outputs, misinformation, and operational failures — as increasingly capable models are integrated into critical systems.
The launch of this nonprofit highlights a broader trend in the AI community: experts who helped build leading models are now advocating for governance structures that balance rapid technological progress with public safety and ethical responsibility. The call for independent safety audits signals a shift toward more collaborative oversight, in which industry and external evaluators work together to address the risks associated with frontier AI.