According to a recent report by Rest of World on the use of AI in the 2025 state election in Bihar, political campaigns are increasingly leaning on artificial‑intelligence tools such as ChatGPT and Claude, along with voice‑cloning and video‑generation services, to produce targeted campaign material at scale, often in local dialects. The result: video clips, speeches, and messages crafted for specific voter demographics and delivered via social media, messaging apps, and community chat groups.
Campaign strategists say this approach dramatically cuts the cost and manpower needed for outreach, while enabling them to reach even remote or rural areas. For instance, voice‑cloned speeches in local dialects have helped candidates “connect” with voters in regions where traditional campaigning (rallies, door‑to‑door canvassing) would be difficult or expensive.
But such power carries serious downsides. The same AI tools have enabled the creation of deepfakes and synthetic media: fake videos or audio that show politicians (or even deceased public figures) endorsing or criticizing candidates. This blurs the line between truth and fabrication, making it harder for voters, especially older ones or those less familiar with technology, to tell genuine content from AI‑generated deception. According to Rest of World, separating real from AI‑made content proved "a real challenge."
Even though the Election Commission of India (ECI) has issued advisories urging disclosure of AI‑generated content, enforcement remains inconsistent. Critics also argue that the current wave of AI‑powered campaigning deepens inequities: larger or better‑funded parties can exploit AI for broad reach, while smaller or resource‑constrained candidates may struggle to keep up, potentially undermining democratic fairness.