Could an AI chatbot talk you out of believing a conspiracy theory?
The findings of a study by David Rand and co-authors indicate people's minds can be changed with facts, despite pessimism about that prospect.
"We see that the AI overwhelmingly was providing non-conspiratorial explanations for these seemingly conspiratorial events."
A lot has been written about conspiracy theories on the internet, making them very well represented in the AI chatbot model's training data.
The conversations "fundamentally changed people's minds. The effect didn't vary significantly based on which conspiracy was named and discussed."
"It is the facts and evidence themselves that are really doing the work here."
"In a recent paper published in Nature, we found that a simple accuracy nudge...improved the quality of the news [people] shared afterward."
"People fall for fake news when they rely on their intuitions and emotions, and therefore don't think enough about what they are reading."
"...the point is that the platforms are, by design, constantly distracting people from accuracy."
Speed, distraction and emotion can impair a person's ability to sniff out misinformation on social media.
When a user shares something...it seems that they are mostly trying to impress and entertain their followers.