An opinion piece in the Buenos Aires Times argues that society stands at a crossroads between two dramatically different futures shaped by artificial intelligence — one of utopian abundance and one of perilous risk. The author notes that some leading AI thinkers warn that highly advanced systems could threaten human civilisation if not properly aligned, because they learn from vast historical records that include humanity’s worst impulses. This raises the question of how to ensure AI behaves benevolently rather than malevolently.
The essay frames AI as the latest in a long list of perceived threats that have preoccupied humanity, comparing today’s fears with past anxieties about overpopulation and nuclear conflict. It suggests that worries about AI’s potential to self-improve or act unpredictably are not entirely irrational given the broader context of existential risks society has faced. At the same time, it implies that such fears can overshadow more grounded discussions about how to govern and shape AI’s development responsibly.
A central theme is the lack of consensus on what AI’s future holds. While some technologists predict a science-fiction-like world of leisure and abundance driven by automation, others emphasise dangers, including loss of human control, unintended consequences, and the difficulty of ensuring powerful systems act in humanity’s best interest. The piece questions whether humans can truly control a system whose intelligence far exceeds their own, a concern that echoes broader debates about existential risk and alignment.
Ultimately, the article urges a balanced perspective: acknowledging both the transformative potential of AI and the seriousness of its risks. It suggests that like historical threats, AI requires thoughtful public and policy engagement rather than panic or blind optimism, pointing to the need for meaningful debate, governance, and ethical consideration as the technology evolves.