Former OpenAI employees have raised serious concerns about the company's priorities, claiming that profit is being put ahead of AI safety. The controversy centers on OpenAI's potential shift away from its original nonprofit mission and profit caps, which were designed to ensure that the benefits of AI advancements serve humanity, not just investors.
The former employees, including Carroll Wainwright, Jan Leike, and Ilya Sutskever, argue that OpenAI is compromising on safety to drive profits. Leike, who led the Superalignment team, has stated that the company is not taking AI safety seriously enough. Critics contend that OpenAI's leadership, particularly CEO Sam Altman, lacks transparency and accountability. Former board member Tasha McCauley has said that Altman's behavior "should be unacceptable" at a company dealing with high-stakes AI safety issues.
The former employees have also highlighted weaknesses in OpenAI's safety protocols. William Saunders, for instance, revealed that internal security was so weak that hundreds of engineers could have stolen the company's most advanced AI models, including GPT-4. The group is calling for significant changes: restoring the nonprofit structure's governing power, independent oversight, whistleblower protections, and the retention of profit caps.
They want OpenAI to prioritize AI safety and recommit to its founding mission of developing AI for the broad benefit of humanity rather than for investors. The controversy raises important questions about the ethics of AI development and the need for greater transparency and accountability across the industry.