The headlines this election cycle have been dominated by unprecedented events, among them former President Donald Trump’s criminal conviction, the attempt on his life, President Joe Biden’s disastrous debate performance and his replacement on the Democratic ticket by Vice President Kamala Harris. It’s no wonder other important political developments have been drowned out, including the steady drip of artificial intelligence-enhanced attempts to influence voters.
During the presidential primaries, a fake Biden robocall urged New Hampshire voters to wait until November to cast their votes. In July, Elon Musk shared a video featuring a voice that mimicked Harris saying things she never said. Originally labeled as a parody, the clip quickly morphed into an unlabeled post on X with more than 130 million views, highlighting the challenge voters face.
More recently, Trump weaponized concerns about AI by falsely claiming that a photo of a Harris rally was AI-generated, suggesting the crowd wasn't real. And a deepfake photo of the attempted assassination of the former president altered the faces of Secret Service agents so that they appeared to be smiling, promoting the false theory that the shooting was staged.
When it comes to AI manipulation, the public has to be ready for anything.
Voters wouldn't be in this predicament if candidates had clear policies on the use of AI in their campaigns. Written guidelines about how campaigns intend to use AI would let people compare candidates' actual use of the technology against their stated policies. If a politician lobbies for watermarking AI-generated content so people can identify it, for example, they should apply such labels to the AI-generated material in their own ads and other campaign communications.
AI policy statements can also help people protect themselves from bad actors trying to manipulate their votes. And without trustworthy means of assessing how AI is being used, voters lose out on the value the technology could bring to elections if it were deployed properly, fairly and with full transparency.
It’s not as if politicians aren’t using AI. Indeed, companies such as Google and Microsoft have acknowledged that they have trained dozens of campaigns and political groups on using generative AI tools.
Government regulators have responded to concerns about AI's effect on elections. But the Federal Election Commission's chair announced last month that the agency was ending its consideration of rules governing AI in political ads. Officials said such regulation would exceed the agency's authority and that they would await direction from Congress on the issue.
It's likely too late in this election cycle to expect campaigns to start disclosing their AI practices. So the onus lies with voters to remain vigilant about AI, in much the same way that other technologies, such as self-checkout at grocery and other stores, have shifted responsibility onto consumers.
Voters can't rely on the election information that comes to their mailboxes, inboxes and social media platforms to be free of manipulation. They need to take note of who has funded the distribution of such materials and look for obvious signs of AI use in images, such as missing fingers or mismatched earrings. Voters should know the source of the information they are consuming, how it was vetted and how it is being shared. All of this will contribute to information literacy, a skill that, along with critical thinking, voters will need as they fill out their ballots.
Ann G. Skeet and John P. Pelissero are with the Markkula Center for Applied Ethics at Santa Clara University. This column appeared in the Los Angeles Times.