What can be done to reduce the spread of fake news? MIT Sloan research finds that shifting people’s attention toward accuracy can decrease online misinformation sharing

Findings have implications for how social media companies stem the flow of false news 

Cambridge, Mass., March 17, 2021—Simple interventions to reduce the spread of misinformation can shift people’s attention toward accuracy and help them become more discerning about the veracity of the information they share on social media, according to new research led by David Rand, Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences at the MIT Sloan School of Management.

Rand conducted the research with his colleagues, Gordon Pennycook of the Hill/Levene Schools of Business at the University of Regina, Ziv Epstein, a doctoral student at the MIT Media Lab, Mohsen Mosleh of the University of Exeter Business School, Antonio Arechar, a research associate at MIT Sloan, and Dean Eckles, the Mitsubishi Career Development Professor and an Associate Professor of Marketing at MIT Sloan. The team’s findings are published in a forthcoming issue of the journal Nature. 

The study arrives at a time when the sharing of misinformation on social media—including both patently false political “fake news” and misleading hyperpartisan content—has become a key focus of public debate around the world. The topic gained prominence in 2016 in the aftermath of the U.S. presidential election and the referendum on Britain’s exit from the European Union, known as Brexit, during which fabricated stories, presented as legitimate news, received wide distribution on social media. The proliferation of false news during the COVID-19 pandemic, and this January’s violent insurrection at the U.S. Capitol, illustrate that disinformation on platforms including Facebook and Twitter remains a pervasive problem.

The study comprises a series of surveys and field experiments. In the first survey, which involved roughly 850 social media users, the researchers found a disconnect between how people judge a news article’s accuracy and their decisions about whether to share it. Even though people rated true headlines as much more accurate than false headlines, headline veracity had little impact on sharing. Although this may seem to indicate that people share inaccurate content because, for example, they care more about furthering their political agenda than they care about truth, Prof. Rand and his team propose an alternative explanation: most people do not want to spread misinformation, but the social media context focuses their attention on factors other than truth and accuracy. Indeed, when directly asked, most participants said it was important to share only news that is accurate – even when they had just indicated they would share numerous false headlines only minutes before.

“The problem is not so much that people don’t care about the truth or want to purposely spread fake news; it’s that social media makes us share things that we would think better of if we stopped to think,” says Prof. Rand. “It’s understandable: scrolling through Twitter and Facebook is distracting. You’re moving at top speed, and reading the news while also being bombarded with pictures of cute babies and funny cat videos. You forget to think about what’s true or not. When it comes to retweeting a headline—even one you would realize was inaccurate if you thought about it—you fail to carefully consider its truthfulness because your attention is elsewhere.”

Subsequent survey experiments with thousands of Americans found that subtly prompting people to think about accuracy increases the quality of the news they share. In fact, when participants had to consider accuracy before making their decisions, the sharing of misinformation was cut in half.

Finally, the team conducted a digital field experiment involving over 5,000 Twitter users who had previously shared news from websites known for publishing misleading content. The researchers used bot accounts to send the users a message asking them to evaluate the accuracy of a random non-political headline – and found that this simple accuracy prompt significantly improved the quality of the news the users subsequently retweeted. “Our message made the idea of accuracy more top-of-mind,” says Prof. Pennycook, who was the co-lead author on the paper with Mosleh and Epstein. “So, when they went back to their newsfeeds, they were more likely to ask themselves if posts they saw were accurate before deciding whether to share them.”

The research team’s findings have implications for how social media companies can stem the flow of misinformation. Platforms could, for instance, implement simple accuracy prompts to shift users’ attention toward the reliability of the content they read before they share it online. “By leveraging people’s existing but latent capabilities for discerning what is true, this approach has the advantage of preserving user autonomy. Therefore, it doesn’t require social media platforms to be the arbiters of truth, but instead enables the users of those platforms,” says Epstein. The team has been working with researchers at Google to develop applications based on this idea, and hopes that social media companies like Facebook and Twitter will follow suit.

“Our research shows that people are actually often fairly good at discerning falsehoods from facts, but in the social media context they’re distracted and lack the time and inclination to consider it,” says Prof. Mosleh. “But if the social media platforms reminded users to think about accuracy—maybe when they log on or as they’re scrolling through their feeds—it could be just the subtle prod people need to get in a mindset where they think twice before they retweet,” concludes Prof. Rand.

The MIT Sloan School of Management

The MIT Sloan School of Management is where smart, independent leaders come together to solve problems, create new organizations, and improve the world. Learn more at mitsloan.mit.edu.