Whether AI could end human control by 2100 is a topic of ongoing debate. Some researchers and public intellectuals warn of extreme risks from artificial intelligence, while others are more optimistic. In one forecasting exercise, AI experts put the chance of AI causing human extinction by 2100 at 3%, while professional "superforecasters" put it at 0.38%. Separately, a survey of roughly 3,000 AI experts suggested a substantial probability that human-level machine intelligence will arrive by the mid-21st century.
Rapid capability gains, together with what current systems can already do, lead some researchers to argue that AI could permanently disempower humanity by 2100. On this view, advanced AI systems might seek and acquire power, potentially resulting in human extinction or an irreversible loss of human control.
Not all experts share this level of concern, however. A RAND Corporation study found it difficult to describe a scenario in which AI conclusively poses an extinction threat, even one involving nuclear war or biological pathogens: the authors concluded that humans are too adaptable, too plentiful, and too dispersed for AI to wipe out with the tools available today.
Mitigating AI-related risks will likely require technical research on building safe AI systems, alongside strategy research and policy development. Organizations such as DeepMind, Anthropic, and OpenAI maintain teams dedicated to technical AI safety research. As AI continues to evolve, the potential risks and implications of its development and deployment warrant careful, ongoing consideration.