The debate over superhuman AI is heating up, with companies and researchers sharply divided. On one hand, leaders of major AI firms such as OpenAI's Sam Altman are hyping the imminent arrival of "strong" machine intelligence that will surpass human capabilities.
On the other hand, many researchers in the field are skeptical, viewing these claims as marketing spin rather than genuine predictions of imminent breakthroughs. They argue that current machine-learning techniques, while impressive, remain far from achieving true human-like intelligence.
The stakes are high, with predicted outcomes ranging from machine-delivered hyperabundance to human extinction. As the debate rages on, one thing is clear: if superhuman AI does arrive, its consequences for humanity will be far-reaching.