The increasing use of artificial intelligence (AI) in politics has raised concerns about its potential impact on democracy. Autocratic regimes are leveraging AI to consolidate power, suppress dissent, and manipulate public opinion, while democracies are struggling to keep pace due to ethical constraints and regulatory hurdles.
AI's ability to process vast amounts of data for surveillance and propaganda gives authoritarian states an edge, allowing them to quash opposition more efficiently. For instance, China’s use of AI-powered facial recognition has created a pervasive surveillance network, enabling real-time monitoring of citizens and the swift suppression of potential unrest. Democracies, by contrast, are investing in AI for defensive purposes, but regulatory frameworks like the EU’s AI Act impose strict guidelines that can slow innovation.
The tension is particularly acute in electoral integrity, where AI-generated deepfakes and misinformation campaigns pose existential threats. Recent examples include AI-generated audio recordings of Slovakia's liberal party leader discussing vote rigging and raising beer prices, and a robocall impersonating US President Joe Biden urging voters to abstain from a primary election.
Experts advocate for increased public-private partnerships in AI research, emphasizing transparency and accountability. Some potential solutions include developing AI tools that can fact-check and combat disinformation, and establishing clear governance structures and regulations to balance innovation with democratic values. Ultimately, the fight over AI isn’t merely technical—it’s a contest of values, and democracies must innovate without compromising principles to ensure this transformative technology serves humanity rather than subjugating it.
Cognitive scientist Gary Marcus and host Garry Kasparov discuss the need for humans to take control and ensure AI doesn’t harm democratic systems. They suggest that people could organize strikes or boycotts, or simply refuse to use generative AI, until developers address issues like environmental impact and misinformation. Marcus believes that future AI could be used to fact-check automatically and combat disinformation, but that for now the political will is lacking.