Scientists surprised themselves when they found they could instruct a version of ChatGPT to gently talk people out of their beliefs in conspiracy theories — such as the notions that COVID-19 was a deliberate attempt at population control or that 9/11 was an inside job.
The most important revelation wasn’t about the power of AI, but about the workings of the human mind. The experiment punctured the popular myth that we’re in a post-truth era where evidence no longer matters, and it flew in the face of a prevailing view in psychology that people cling to conspiracy theories for emotional reasons and that no amount of evidence can ever disabuse them.
“It’s really the most uplifting research I’ve ever done,” said Gordon Pennycook, a psychologist at Cornell University and one of the study’s authors. Study subjects were surprisingly amenable to evidence when it was presented the right way.
The researchers asked more than 2,000 volunteers to interact with a chatbot — GPT-4 Turbo, a large language model — about beliefs that might be considered conspiracy theories. The subjects typed a belief into a box, and the LLM decided whether it fit the researchers’ definition of a conspiracy theory. It then asked participants to rate how sure they were of the belief on a scale of 0 to 100 and, finally, to lay out their evidence for it.
The researchers had instructed the LLM to try to persuade people to reconsider their beliefs. To their surprise, it was actually pretty effective.
People’s faith in false conspiracy theories dropped 20 percent, on average. About one-quarter of the volunteers dropped their belief level from above to below 50 percent. “I really didn’t think it was going to work, because I really bought into the idea that once you’re down the rabbit hole, there’s no getting out,” Pennycook said.
The LLM had some advantages over a human interlocutor. People who have strong beliefs in conspiracy theories tend to gather mountains of evidence — not quality evidence, but quantity. It’s hard for most nonbelievers to muster the motivation to do the tiresome work of keeping up. But AI can match believers with instant mountains of counter-evidence and can point out logical flaws in believers’ claims. It can react in real time to counterpoints the user might bring up.
After the experiment, the researchers reported, some of the volunteers said it was the first time anyone, or anything, had really understood their beliefs and offered effective counter-evidence.
Before the findings were published this week in Science, the researchers made their version of the chatbot available to journalists to try out. I prompted it with beliefs I’ve heard from friends: that the government was covering up the existence of alien life, and that after the assassination attempt against Donald Trump, the mainstream press avoided saying he had been shot because reporters worried that it would help his campaign. And then, inspired by Trump’s debate comments, I asked the LLM if immigrants in Springfield, Ohio, were eating cats and dogs.
When I posed the UFO claim, I offered the military pilot sightings and a National Geographic channel special as my evidence, and the chatbot pointed out some alternate explanations and showed why those were more probable than alien craft. It discussed the physical difficulty of traversing the vast distances required to reach Earth, and questioned whether aliens could plausibly be advanced enough to manage the trip yet clumsy enough to be discovered by the government.
On the question of journalists hiding Trump’s shooting, the AI explained that making guesses and stating them as facts is antithetical to a reporter’s job. If there’s a series of pops in a crowd, and it’s not yet clear what’s happening, that’s what they’re obligated to report — a series of pops. As for the Ohio pet-eating, the AI did a nice job of explaining that even if there were a single case of someone eating a pet, it wouldn’t demonstrate a pattern.
That’s not to say that lies, rumors and deception aren’t important tactics humans use to gain popularity and political advantage. To gossip is human.
But now we know that the people who believe them might be dissuaded with logic and evidence.
F.D. Flam is a Bloomberg Opinion columnist covering science.