The National Republican Senatorial Committee (NRSC) has released a 30-second AI-generated deepfake attack ad targeting Democratic Senate Minority Leader Chuck Schumer, sparking widespread controversy over the use of artificial intelligence in political messaging. The ad features Schumer repeatedly saying "Every day gets better for us" and grinning, with a narrator concluding, "The Schumer shutdown is making things worse across America and Democrats love it." Although Schumer did say those words in an interview with Punchbowl News, experts argue that using AI to fabricate video footage of him saying them crosses a line.
The NRSC defends the ad, stating that it is simply "visualizing" Schumer's comment and includes an AI disclaimer. However, critics argue that the disclaimer is neither clear nor prominent, particularly once the ad circulates on social media platforms. Hany Farid, a University of California, Berkeley professor specializing in manipulated media, warns that such deepfakes do not just mislead viewers; they also erode public trust in all political content, real or fake. He suggests that instead of creating a deepfake, the NRSC could simply have overlaid Schumer's quote on an image of him.
The ad has sparked concerns about the growing prevalence of AI fakes in politics and the potential for misuse. Farid notes that when political leaders post deceptive deepfakes, they not only mislead the public but also risk undermining trust in authentic content. The NRSC's use of AI-generated deepfake footage raises important questions about the future of political campaigning and the ethical use of AI. Joanna Rodriguez, the NRSC's communications director, states, "AI is here and not going anywhere. Adapt & win or pearl clutch & lose."
The controversy surrounding the ad highlights the need for clearer guidelines and regulations on the use of AI in politics. As AI technology continues to advance, it's crucial that policymakers, tech companies, and political actors work together to establish standards that protect the integrity of information and prevent the misuse of AI in political campaigns. The implications of this technology extend beyond politics, raising concerns about the potential for misinformation campaigns and the erosion of trust in digital content.