Chatbots can persuade people to stop believing in conspiracy theories

by
Rhiannon Williams
from MIT Technology Review

The internet has made it easier than ever before to encounter and spread conspiracy theories. And while some are harmless, others can be deeply damaging, sowing discord and even leading to unnecessary deaths.

Now, researchers believe they've uncovered a new tool for combating false conspiracy theories: AI chatbots. Researchers from MIT Sloan and Cornell University found that chatting about a conspiracy theory with a large language model (LLM) reduced people's belief in it by about 20%, even among participants who claimed that their beliefs were important to their identity. The research is published today in the journal Science.

The findings could represent an important step forward in how we engage with and educate people who espouse such baseless theories, says Yunhao (Jerry) Zhang, a postdoctoral fellow affiliated with the Psychology of Technology Institute who studies AI's impacts on society.

"They show that with the help of large language models, we can, I wouldn't say solve it, but we can at least mitigate this problem," he says. "It points out a way to make society better."

Few interventions have been proven to change conspiracy theorists' minds, says Thomas Costello, a research affiliate at MIT Sloan and the lead author of the study. Part of what makes it so hard is that different people tend to latch on to different parts of a theory. This means that while presenting certain bits of factual evidence may work on one believer, there's no guarantee that it'll prove effective on another.

That's where AI models come in, he says. "They have access to a ton of information across diverse topics, and they've been trained on the internet. Because of that, they have the ability to tailor factual counterarguments to particular conspiracy theories that people believe."

The team tested its method by asking 2,190 crowdsourced workers to participate in text conversations with GPT-4 Turbo, OpenAI's latest large language model.

Participants were asked to share details about a conspiracy theory they found credible, why they found it compelling, and any evidence they felt supported it. These answers were used to tailor responses from the chatbot, which the researchers had prompted to be as persuasive as possible.

Participants were also asked to indicate how confident they were that their conspiracy theory was true, on a scale from 0 (definitely false) to 100 (definitely true), and then to rate how important the theory was to their understanding of the world. Afterward, they entered into three rounds of conversation with the AI bot. The researchers chose three rounds to make sure they could collect enough substantive dialogue.
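To make the setup concrete, the sketch below shows what such a tailored, multi-round exchange could look like in Python using OpenAI's chat completions API. The system prompt, example theory, and conversation flow are illustrative assumptions, not the study's actual materials.

    # Illustrative sketch only: the prompt wording and flow are assumptions,
    # not the study's actual materials.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Details the participant supplies before the conversation begins.
    theory = "The moon landing was staged."  # hypothetical example
    reasons = "The flag appears to wave, and there are no stars in the photos."

    messages = [
        {
            "role": "system",
            "content": (
                "You are talking with someone who believes this conspiracy "
                f"theory: {theory}\nTheir stated reasons: {reasons}\n"
                "Respond with accurate, specific factual counterarguments "
                "tailored to their reasons. Be as persuasive as possible."
            ),
        },
        {"role": "user", "content": f"I believe this because: {reasons}"},
    ]

    # Three rounds of back-and-forth, mirroring the study's design.
    for round_number in range(1, 4):
        reply = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=messages,
        )
        answer = reply.choices[0].message.content
        print(f"Round {round_number}: {answer}\n")
        messages.append({"role": "assistant", "content": answer})
        if round_number < 3:
            # The participant's next message, stubbed with console input here.
            messages.append({"role": "user", "content": input("Your reply: ")})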

After each conversation, participants were asked the same rating questions. The researchers followed up with all the participants 10 days after the experiment, and then two months later, to assess whether their views had changed following the conversation with the AI bot. On average, participants reported a 20% reduction in belief in their chosen conspiracy theory, suggesting that talking to the bot had fundamentally changed some people's minds.
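As a rough illustration of what such a figure looks like on the study's 0-100 rating scale, here is a short sketch; the ratings are invented for illustration, not data from the study, and the article does not spell out whether the 20% is an absolute or a relative change.

    # Invented example ratings, for illustration only (not study data).
    belief_before = 80  # pre-conversation confidence on the 0-100 scale
    belief_after = 64   # confidence after the three rounds

    point_drop = belief_before - belief_after           # 16 points
    relative_drop = point_drop / belief_before * 100    # 20.0
    print(f"Drop: {point_drop} points, or {relative_drop:.0f}% of the initial rating")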

"Even in a lab setting, 20% is a large effect on changing people's beliefs," says Zhang. "It might be weaker in the real world, but even 10% or 5% would still be very substantial."

The authors sought to safeguard against AI models' tendency to make up information, known as hallucinating, by employing a professional fact-checker to evaluate the accuracy of 128 claims the AI had made. Of these, 99.2% were found to be true, while 0.8% were deemed misleading. None were found to be completely false.

One explanation for this high degree of accuracy is that a lot has been written about conspiracy theories on the internet, making them very well represented in the model's training data, says David G. Rand, a professor at MIT Sloan who also worked on the project. The adaptable nature of GPT-4 Turbo means it could easily be connected to different platforms for users to interact with in the future, he adds.

"You could imagine just going to conspiracy forums and inviting people to do their own research by debating the chatbot," he says. "Similarly, social media could be hooked up to LLMs to post corrective responses to people sharing conspiracy theories, or we could buy Google search ads against conspiracy-related search terms like 'Deep State.'"

The research upended the authors' preconceived notions about how receptive people were to solid evidence debunking not only conspiracy theories, but also other beliefs that are not rooted in good-quality information, says Gordon Pennycook, an associate professor at Cornell University who also worked on the project.

"People were remarkably responsive to evidence. And that's really important," he says. "Evidence does matter."
