Politics

AI-generated political ads are backfiring: Politicians are looking worse and voters are not impressed

Voters punish campaigns that flaunt AI, viewing synthetic messages as deceptive; experiments and polls show trust, persuasion and turnout drop.


For political campaign managers looking to create powerful communications quickly and cheaply, the lure of AI is strong. But it can also backfire, sometimes spectacularly. Voters are increasingly sensitive to the use of artificial intelligence, perceiving AI-made content as fake and phony, impressions politicians badly want to avoid.

The use of AI has even led to huge fines and criminal charges. In New Hampshire, a political consultant who sent AI-generated robocalls mimicking then-President Joe Biden's voice and telling voters not to vote was hit with a $6 million fine, though he was acquitted on criminal charges of voter suppression and impersonating a candidate.

Andrew Cuomo mocked for using AI

Former New York governor Andrew Cuomo, who was forced to resign in August 2021 over sexual harassment claims, lost his comeback campaign in New York’s Democratic primary this summer. But refusing to admit defeat, he’s running as an independent and this week launched an ad in which AI-generated clips show him doing various stereotypical New York City jobs.

Cuomo's target, rival candidate Zohran Mamdani, wasn't slow to mock the campaign, saying: "In a city of world-class artists and production crews hunting for the next gig, Andrew Cuomo made a TV ad the same way he wrote his housing policy: with AI. Then again, maybe a fake Cuomo is better than the real one?"

As Mamdani noted, this isn't the only time Cuomo has been caught using AI: his housing plan was clearly produced using ChatGPT, OpenAI's chatbot.

What do voters actually think of AI in politics?

Across studies and polls, voters are broadly wary of AI in politics. Experimental work from NYU’s Center on Technology Policy found that when political ads carry AI-use disclaimers, respondents rate the sponsoring candidate as less trustworthy and say they are less likely to vote for them, with the penalty especially strong for attack ads.

National polling echoes this skepticism: a large AP-NORC/Harris survey found bipartisan opposition to candidates using AI to create or edit campaign content, tailor personalized ads, or answer voter queries, and most respondents said AI will add to election misinformation.

Voters also dislike unlabeled AI and feel ill-equipped to spot it. A Savanta poll for the Guardian found bipartisan disapproval when politicians post AI-generated content without disclosure and low confidence in distinguishing real from fake visuals.

Pew Research likewise shows a broad expectation that AI in the 2024 campaign would be used mostly for bad rather than good, underscoring a hostile baseline toward AI-mediated political messaging.
