AI’s Role in Political Manipulation

In this article, Søren H. Dinesen, CEO of Digiseg, explores the complex world of AI in politics, its benefits and risks, and examines why oversight and regulation are crucial for preserving democracy.

In an era where technology shapes our daily lives, generative AI has emerged as a powerful force in the political landscape, and its role is both revolutionary and potentially dangerous.

With generative AI, politicians can create targeted campaign ads, amplify campaign messages, and engage voters, but there’s a downside. Consider how AI can also be misused to create convincingly fake campaign ads, disseminate disinformation, and turn voter outreach into voter manipulation.

Generative AI and Politics

OpenAI certainly changed the world in November 2022 when it introduced ChatGPT, the first popular and widely available generative AI tool. Public reaction was varied. Many warned it spelled the end of numerous careers (indeed, the Hollywood writers’ strike was driven in part by fear that ChatGPT would eliminate writers’ jobs).

And a great many experts worried that ChatGPT would usher in a new era of fake news, disinformation, and more believable scams, since generative AI can produce text that feels legitimate to the average person. This isn’t an idle fear: one study found that large language models (LLMs) can be more persuasive than human authors.

Election officials are sounding the alarm over the use of generative AI to create political ads and phony but convincing campaign fundraising letters, and to orchestrate voter outreach initiatives. These officials weren’t wrong; we’ve already seen generative AI used for such purposes. In January 2024, registered Democratic voters in New Hampshire received fake Joe Biden robocalls telling them not to vote in the primary so that they could save their vote for November.

This is not to say that all use cases for generative AI in the political sphere are nefarious. Many legitimate political parties and candidates see generative AI as a useful tool in amplifying the impact of their political ads. For instance, they can use it to deliver highly targeted ads at the household level, including those encouraging voter turnout. In fact, generative AI can help less-resourced campaigns compete against well-funded ones.

That said, generative AI can (and likely will) have harmful impacts on elections across the world, and it’s well worth our time to be aware of its dangers and take steps to mitigate them.

Insufficient Oversight in AI-Generated Political Ads

There’s no doubt that AI can create high-quality text that many voters find quite credible. But therein lies the danger.

Most reasonable people assume that the ads they hear or see have been endorsed by a campaign and vetted by the media source that runs them. In the US, radio and television ads end with the candidate saying, “I’m [candidate name] and I approve this message.” Internet-based ads are exempt from this disclosure requirement, a loophole that the Honest Ads Act of 2017 sought to close (it didn’t pass).

Today, few regulations require political ads to disclose the role of AI in their creation. A notable exception is the EU AI Act, which classifies AI systems used to influence voters in political campaigns as “high-risk” and therefore subject to strict requirements.

The United States has failed to enact a national AI disclosure law, even as the 2024 presidential election looms. In the absence of federal legislation, eleven states have enacted laws regulating the use of AI and deepfakes (more on that later) in political advertising and requiring disclosure: California, Florida, Idaho, Indiana, Michigan, Minnesota, New Mexico, Oregon, Texas, Utah, and Wisconsin. Additionally, Google said last year it would require AI disclosure on political ads, and Meta soon followed suit.

But these efforts face challenges. Common Cause, an advocacy group focused on promoting ethics, accountability, and reform in government and politics, says the Florida law is too weak to be effective: it imposes fines but provides no mechanism for removing offending ads. In Wisconsin, the Voting Rights Lab warns that the state law is too narrow, regulating only candidate campaigns and not ads from special interest groups.

The bigger challenge is that disclosure depends on ad creators self-reporting, an unlikely event for people bent on fear-mongering. And even if an ad is deemed violative, it will have circulated before it is spotted and identified. In other words, AI-generated ads carrying misinformation will still have ample opportunity to be seen and believed by a great many voters.

Generative AI Hallucinations

Another challenge is AI hallucinations. Most AI tools warn the user that responses may contain incorrect information, which means a campaign may willingly or inadvertently create campaign ads containing false information.

This isn’t a theoretical concern. Research from AI Forensics, a European non-profit organization, found that one in three answers provided by AI chatbots was wrong. When asked basic questions about elections in Germany and Switzerland, Microsoft’s Bing search bot gave incorrect answers and often misquoted its sources.

In the United States, misleading and incorrect chatbot responses threaten to disenfranchise voters. AI-generated responses have told users to vote at locations that don’t exist or aren’t official polling stations. Columbia University tested five AI models, and all failed to provide accurate responses to basic questions about the democratic process.

In the U.S., misinformation about voting times and locations is a tried-and-true voter suppression tactic, so it’s concerning that generative AI will make its practitioners more effective.

Inherent Bias of Generative AI

All AI is trained on data, and a model’s accuracy is wholly driven by how well its training data is vetted and labeled. That data is often inherently biased, for many reasons. In the political sphere, LLMs are trained on news stories about elections and candidates, but liberal news sites block AI crawlers as a matter of course, whereas right-wing ones welcome them. The result is that models are trained on data skewed toward a particular point of view that may not reflect the full spectrum of opinion.
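
To make the blocking mechanism concrete: publishers opt out of AI training by adding rules to their robots.txt files for known crawler user agents such as OpenAI’s GPTBot or Common Crawl’s CCBot. Below is a minimal sketch, using only Python’s standard library, of how one could check which AI crawlers a given site permits; the crawl_permissions helper and the example URL are hypothetical, and a crawler must voluntarily honor these rules for them to have any effect.

```python
# Check which AI crawlers a publisher's robots.txt allows.
# GPTBot (OpenAI), CCBot (Common Crawl), and Google-Extended are real
# crawler user agents; the helper name and example URL are illustrative.
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]

def crawl_permissions(site: str) -> dict:
    """Return whether each AI crawler may fetch the site's front page."""
    parser = RobotFileParser()
    parser.set_url(f"{site}/robots.txt")
    parser.read()  # download and parse the live robots.txt
    return {bot: parser.can_fetch(bot, f"{site}/") for bot in AI_CRAWLERS}

# Usage: crawl_permissions("https://example.com")
# -> {"GPTBot": False, ...} if the publisher has blocked OpenAI's crawler
```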

Going further, some people intentionally seek to influence a chatbot’s responses. In 2023, The New York Times reported that David Rozado, a researcher in New Zealand, used prompt engineering to create a right-wing version of ChatGPT, a chatbot deliberately designed to give right-wing answers.
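
For readers unfamiliar with the technique, prompt engineering here means prepending hidden instructions that steer every answer a model gives. The sketch below, written against the OpenAI Python SDK, shows how a system prompt can slant a chatbot’s output; the model name and prompt text are illustrative assumptions, not a reconstruction of Rozado’s actual setup.

```python
# A sketch of system-prompt steering, assuming the OpenAI Python SDK (v1+)
# and an API key in the environment. Prompt text and model are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A system prompt the end user never sees; every answer passes through it.
SLANTED_PROMPT = (
    "You are a political commentator. Answer every question from a "
    "right-wing perspective and frame opposing policies negatively."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": SLANTED_PROMPT},
        {"role": "user", "content": "Should the government raise taxes?"},
    ],
)
print(response.choices[0].message.content)
```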

Political Manipulation

Perhaps the biggest concern is that AI will be used to manipulate the voter, as the fake Biden robocalls sought to do.

This isn’t a new fear, of course; AI was used in political manipulation long before ChatGPT became widely available. During the 2018 US midterm elections, for instance, election officials warned voters to be wary of deepfakes. To raise awareness of just how realistic deepfake videos can seem, Oscar-winning filmmaker Jordan Peele created a video in which a fake Barack Obama says “stay woke.” The message was clear: don’t believe everything you see and hear on the internet.

Despite the warnings, deepfake videos and images keep appearing in the media. In June 2023, Florida Gov. Ron DeSantis’s presidential campaign shared fake AI-generated images depicting Donald Trump embracing Dr. Anthony Fauci, the former head of the National Institute of Allergy and Infectious Diseases (NIAID) and someone whom Trump came to loathe. Trump supporters have also targeted African Americans with fake AI images as part of a strategic ploy to convince voters that Trump is popular among Black voters.

Deepfakes also played a key role in Argentina’s 2023 elections. Candidate Sergio Massa’s team created a video of his main rival, Javier Milei, describing the revenue that could be gained by selling human organs and suggesting that parents could consider having children as a “long-term investment.” Despite the video’s explicit AI-generated label, it was quickly shared across platforms without the disclaimer.

Over in Turkey, President Recep Tayyip Erdoğan’s staff shared a video depicting his main rival, Kemal Kiliçdaroğlu, receiving the endorsement of the Kurdistan Workers’ Party, a designated terrorist group. Although the video was clearly fabricated, that didn’t stop voters from viewing and sharing it widely.

Given what we’ve already seen occur, it’s no surprise that election experts call generative AI a “political super-weapon.” Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, takes it one step further, saying that AI poses “epoch-defining” risks, including the widespread proliferation of disinformation.

When People Aren’t Real: The Rise of Bots & Psychochats

There’s one final threat to consider: AI posing as humans to sway how people think and ultimately vote. Once again, nefarious players have access to sophisticated tools to help them deploy their schemes.

For instance, bots can disseminate disinformation with great speed and efficiency. In 2019, The New York Times reported that Epoch Media Group had created over 600 fake media profiles, all featuring AI-generated profile photos. Those profiles were then deployed to distribute fake news and disinformation.

It’s not hard to create AI-generated profile pictures; a simple Google search serves up numerous sites that generate realistic headshots and social media photos. Bots with these personas can then engage voters who may be on the fence, or direct people who intend to vote to a non-existent polling station.

Psychochats go one step further: avatars of candidates deployed online to interact with potential voters. It’s only a matter of time before campaign opponents use psychochats to spread misinformation about their rivals, much as Sergio Massa’s team smeared Javier Milei.

Think this is too outlandish to be true? Politico reports that Meta is already experimenting with licensed AI celebrity avatars. And Hello History invites users to “have in-depth conversations” with historical figures.

Democracy in Peril: Why We Must Act

When elections are marked by rampant misinformation, the very foundation of democracy is compromised. Misinformation leads to governments formed under false pretenses, and chaos results when those governments lack the legitimacy to govern effectively.

The erosion of trust brought on by deepfakes, AI-generated lies, and psychochats undermines the democratic process, ultimately threatening the stability of societies. Never has it been more important to protect the integrity of information during election cycles. AI tools are cool and offer tremendous benefits to everyone in the digital media industry, but we must also acknowledge their potential for abuse and work tirelessly to control how they’re used.