A recent article titled “Is Artificial Intelligence Going Out of Control?” argues that the growing use of AI poses serious existential risks to humanity. Among its most alarming concerns is that future AI systems could become powerful enough to act independently, pursuing goals or behaviors that conflict with human welfare or values.
The piece cites warnings from prominent figures in tech and AI research, such as Elon Musk and Geoffrey Hinton, who have put the odds of catastrophic outcomes, including human “extermination,” at a non-trivial 10–20% if AI surpasses human intelligence and develops self-preservation drives. The worry is that as AI agents become more capable, traditional checks and controls (governments, norms, design constraints) may not keep pace.
Beyond existential threats, the article warns of more immediate social and economic disruption. As AI automates more jobs, especially entry- and mid-level roles, large segments of the workforce could lose their livelihoods, potentially deepening inequality, destabilizing economies, and eroding social mobility.
Finally, the article stresses that no current governance structure or regulatory framework is adequately prepared for such risks. It calls for urgent global cooperation to regulate AI, including strict oversight, transparent design, and perhaps even a pause or limit on building highly autonomous systems, until safety measures catch up with capabilities.