
Study: ‘Accuracy nudge’ could curtail COVID-19 misinformation online

On February 19 in the Ukrainian town of Novi Sanzhary, alarm spread about the new coronavirus and COVID-19, the disease it causes. “50 infected people from China are being brought to our sanitarium,” began a widely read post on the messaging app Viber. “We can’t afford to let them destroy our population, we must prevent countless deaths. People, rise up. We all have children!!!”

Soon after came another message: “if we sleep this night, then we will wake up dead.”

Citizens mobilized. Roads were barricaded. Tensions escalated. Riots broke out, ultimately injuring nine police officers and leading to the arrests of 24 people. Later, word emerged that the news was false.

As the director-general of the World Health Organization recently put it, “we’re not just fighting an epidemic; we’re fighting an infodemic.”

Now a new study suggests that an “accuracy nudge” from social media networks could curtail the spread of misinformation about COVID-19. The working paper, from researchers at MIT Sloan and the University of Regina, examines how and why misinformation about COVID-19 spreads on social media. The researchers also examine a simple intervention that could slow this spread. (The paper builds on prior work about how misinformation diffuses online.)

The study is authored by the University of Regina’s Gordon Pennycook, MIT Sloan researcher Jonathon McPhetres, MIT Sloan PhD student Yunhao Zhang, and MIT Sloan associate professor David Rand.

In the first experiment, roughly 850 people were split into two groups. One group was asked to classify several news headlines about COVID-19 as accurate or not; the other group was given the same selection of headlines and asked whether they would share each story on social media. The results showed that, for false headlines, 50% more people were willing to share them than rated them as accurate. In other words, people were 50% more likely to share misinformation than to believe it.
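To make that comparison concrete, here is a minimal Python sketch using invented counts (the study’s raw data are not reproduced in this article) showing how a “rated accurate” rate and a “would share” rate for a false headline combine into the 50% figure.

```python
# Hypothetical counts for two false headlines -- illustrative only, not the
# study's data. Each tuple: (rated accurate, willing to share, group size),
# with the two judgments coming from separate groups of respondents.
false_headline_counts = [
    (100, 150, 425),
    (80, 120, 425),
]

for rated_accurate, would_share, n in false_headline_counts:
    believe_rate = rated_accurate / n
    share_rate = would_share / n
    # "50% more likely to share than to believe" means this ratio is ~1.5.
    print(f"believed: {believe_rate:.0%}  would share: {share_rate:.0%}  "
          f"share/believe ratio: {share_rate / believe_rate:.2f}")
```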

 

“Our participants could fairly effectively identify the accuracy of true versus false headlines when asked to do so, but they nonetheless were willing to share many false headlines on social media,” Rand said. “This suggests that the problem of people sharing misinformation is not that people just can't tell true from false.”

So why might people share what they know to be false? Not out of malice, the researchers propose, but because social media draws their attention to motivations besides accuracy, such as attracting the recognition and plaudits of friends and followers. Whether true or not, evocative content is attractive.

A second experiment looked at ways to counteract this impulse. Could a small intervention reduce the sharing of misinformation? Participants were again split into two groups. Mirroring the first experiment, one group was asked about their willingness to share news based on headlines, some true, others false. The second group was also asked about their willingness to share stories, but only after rating the accuracy of a single headline. This small “nudge” to get people thinking about accuracy made them more discerning: those who performed the rating task were less likely to share inaccurate news and more likely to share accurate news.


And though the effect was small for these individuals, Rand noted this may not tell the full story. Downstream network effects can be far greater. “Improving the quality of the content shared by one user improves the content that their followers see, and therefore improves the content their followers share,” he said. “This in turn improves what the followers’ followers see and share, and so on. Thus, the cumulative effects of such an intervention may be substantially larger than what is observed when only examining the treated individuals.”
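A toy simulation can illustrate the compounding Rand describes. The parameters below (the per-user lift from a nudge, how much of that lift each layer of followers inherits, and the audience size) are assumptions for illustration, not the paper’s model.

```python
# Toy cascade, with made-up parameters (not the paper's model): a nudged user
# shares slightly more accurate content, and each follower layer partially
# mirrors the improvement it sees. Assumes each reached user shares one item.

def cascade(lift: float, carryover: float, followers: int, layers: int) -> float:
    """Return the expected extra accurate shares summed over all layers.

    lift: increase in the probability that the nudged user's share is accurate.
    carryover: fraction of that lift each next layer of followers inherits.
    """
    reach, total_extra = 1, 0.0
    for layer in range(layers + 1):
        extra_here = reach * lift
        total_extra += extra_here
        print(f"layer {layer}: reach={reach:>9,}  per-user lift={lift:.5f}  "
              f"extra accurate shares={extra_here:.2f}")
        reach *= followers   # each user's shares are seen by `followers` people
        lift *= carryover    # followers only partly mirror what they see
    return total_extra

total = cascade(lift=0.05, carryover=0.3, followers=100, layers=3)
print(f"network total: {total:.2f} extra accurate shares, "
      f"vs. 0.05 for the treated user alone")
```

In this toy setup the network-wide gain dwarfs the treated user’s own improvement whenever the number of followers times the carryover exceeds 1, which is the intuition behind Rand’s point.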

Whether these results extend beyond social media remains an open question. What about information shared by email or text, for example? But the basic finding is clear and urgent: Twitter, Facebook, and other social media platforms could add periodic pop-ups or in-page content showing a random headline and quizzing users about its truthfulness.
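As a thought experiment, a platform-side version of that quiz could be as simple as the sketch below; the headline pool, trigger probability, and function name are all hypothetical, not anything the study or any platform specifies.

```python
import random

# Hypothetical headline pool; a real platform would presumably draw from
# vetted, fact-checked items rather than a hard-coded list.
HEADLINES = [
    "Vitamin C megadoses cure COVID-19, doctors confirm",
    "Health officials urge hand-washing to slow COVID-19",
]

def maybe_show_accuracy_prompt(show_probability: float = 0.05) -> None:
    """Occasionally quiz the user on one random headline before the feed loads.

    Per the study's logic, the answer itself doesn't need to be scored or
    stored: merely prompting people to consider accuracy is what makes their
    subsequent sharing more discerning.
    """
    if random.random() >= show_probability:
        return
    headline = random.choice(HEADLINES)
    input(f'Quick check: is this headline accurate? "{headline}" (y/n): ')
    print("Thanks for weighing in.")

if __name__ == "__main__":
    maybe_show_accuracy_prompt(show_probability=1.0)  # always fire, for demo
```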

“Our experiment suggests that nudging people to pay attention to accuracy can improve the quality of COVID-19 related content they share online,” Rand wrote in an email. “This is a scalable intervention that social media platforms could easily implement. I hope they will!”

For more info: Zach Church, Editorial & Digital Media Director, (617) 324-0804