Artificial intelligence is increasingly being used in elections worldwide, raising concerns about its impact on democracy. In 2024, numerous countries held elections, and AI tools were employed in various ways, including generating deepfakes, writing speeches, and developing campaign strategies. For instance, in India, candidates used deepfake audio and video to personalize outreach to voters, while in Indonesia, candidates paid for a service that used ChatGPT to write speeches and develop campaign strategies.
The use of AI in elections has sparked concerns about the spread of misinformation and disinformation. AI-generated content can be used to sway voters and undermine trust in the electoral process. In Bangladesh, fake videos targeted the opposition party, highlighting the potential for both domestic and foreign actors to interfere in elections. Women candidates and minority groups may be disproportionately targeted by AI-generated content, potentially deterring their participation in politics.
Regulatory efforts are underway to address these challenges. The European Union has adopted a comprehensive AI regulation that includes provisions to counter the spread of disinformation and misleading content. Twenty-seven technology companies signed the AI Elections Accord, committing to counter deceptive AI-generated election content through improved detection and content provenance.
However, policymakers face a daunting task in balancing regulation with innovation. Effective collaboration among governments, technology companies, and civil society is crucial to addressing these challenges. As the use of AI in elections continues to evolve, it remains to be seen whether democratic institutions can keep pace with technological change and mitigate the risks posed by AI-generated content.