A recent report by International IDEA reviews findings from five workshops held in several countries with Electoral Management Bodies (EMBs) and civil-society organizations, examining how artificial intelligence is being integrated into electoral processes worldwide. The work shows that AI use already goes beyond sensational headlines: it is entering the everyday machinery of elections, including administration, data analysis, information provision, and oversight.
One important insight is that many electoral authorities acknowledge that their internal understanding of AI remains weak. Roughly half of the workshop participants rated their AI literacy as low, yet about a third reported already using AI tools to support election-related processes. This creates a tension: rapid technological uptake without commensurate readiness may expose electoral systems to serious risks.
The dual nature of AI also emerged clearly from the discussions. On the one hand, AI's potential to improve electoral management is real and promising, for example through voter-list cleaning, administrative efficiency, or voter information services. On the other, generative AI and related tools are being used in campaigns for micro-targeting, chatbots, synthetic media such as deepfakes, and content amplification, sometimes to mislead, spread disinformation, or manipulate public opinion. This raises threats to democratic legitimacy, information integrity, and fair competition.
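To make "voter-list cleaning" concrete, the sketch below shows one narrow task such work can involve: flagging likely duplicate registrations. The report does not prescribe any particular technique, so the record fields, similarity threshold, and matching logic here are illustrative assumptions only; a real electoral roll system would rely on dedicated record-linkage tooling and, crucially, human review of every flagged pair.

```python
# Minimal, hypothetical sketch of one "voter-list cleaning" task: flagging
# likely duplicate registrations. Fields and threshold are illustrative
# assumptions, not drawn from the IDEA report.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class VoterRecord:
    voter_id: str
    full_name: str
    date_of_birth: str  # ISO format, e.g. "1990-05-17"


def normalize(name: str) -> str:
    """Lowercase and collapse whitespace so formatting noise does not hide duplicates."""
    return " ".join(name.lower().split())


def likely_duplicates(records: list[VoterRecord], threshold: float = 0.9) -> list[tuple[str, str]]:
    """Return pairs of voter IDs with matching birth dates and near-identical names."""
    pairs = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            if a.date_of_birth != b.date_of_birth:
                continue
            similarity = SequenceMatcher(None, normalize(a.full_name), normalize(b.full_name)).ratio()
            if similarity >= threshold:
                pairs.append((a.voter_id, b.voter_id))
    return pairs


if __name__ == "__main__":
    roll = [
        VoterRecord("V001", "Maria  Lopez", "1990-05-17"),
        VoterRecord("V002", "Maria Lopez", "1990-05-17"),  # same person, formatting differs
        VoterRecord("V003", "Mario Lopez", "1985-02-03"),  # different person
    ]
    print(likely_duplicates(roll))  # -> [('V001', 'V002')]
```

Flagged pairs would go to election officials for adjudication rather than being merged automatically, in line with the human-oversight condition the report emphasizes.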
Finally, the report argues that adopting AI in elections must go hand in hand with building strong governance frameworks. The workshops proposed a “democratic AI foundation” built on five pillars: AI literacy, ethics and human rights, content curation and moderation, regulation and legislation, and the use of AI to improve electoral management. According to IDEA, only when conditions such as human oversight, transparency, readiness, regulation, and accountability are met can AI strengthen, rather than undermine, electoral integrity.