A year ago, Facebook announced a new approach to fighting the spread of misinformation: The company would survey its users about whether they were familiar with a particular news source and, if they were, follow up with a question about how much they trusted the source. This was the wisdom of crowds put to a noble end. The public response was immediate.
“Everyone freaked out about how it was the worst idea ever,” said David Rand, associate professor of management science and brain and cognitive sciences at MIT Sloan. The crowd, most people assumed, is not wise when it comes to finding reliable news sources; our partisan biases are too strong. Rand also thought this was the case. When a reporter got in touch asking his opinion of Facebook’s proposal, he responded that it would never work. Then, he realized that underlying his opinion was a straightforward empirical question: Can laypeople effectively discriminate between reliable and unreliable news sources?
That night, with his collaborator Gordon Pennycook, Rand cobbled together a survey of about 1,000 people who were asked to assess the reliability of 60 news sources. One-third of these were mainstream media, one-third were hyperpartisan, and one-third produced blatantly false “fake news.”
“It actually worked really well,” Rand said of the survey participants’ general ability to discern source quality. Though there were significant differences in how much Republicans and Democrats trusted various mainstream media sources — Republicans trusted Fox News the most, Democrats not so much — members of both parties strongly distrusted hyperpartisan and fake news sites on average. Rand and Pennycook then ran a second survey using a nationally representative sample and a different set of news outlets. They found the same thing.
When they brought in professional fact-checkers to assess the 60 news sources, “the layperson ratings were extremely highly correlated with the fact-checkers, up at a correlation of 0.9,” Rand said. “That’s the highest correlation I’ve ever gotten with any experiment I’ve run in my life, and it’s driven by the fact that neither laypeople nor professionals trust the fake or hyperpartisan news sites.”
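To make the headline number concrete, here is a minimal sketch of how such an agreement score is computed. The ratings below are made up for illustration (they are not the study's data, and the six sources are hypothetical); the point is simply that when both groups rate mainstream sources high and fringe sources low, the Pearson correlation between the two sets of ratings lands near 1.

```python
# Illustrative sketch: Pearson correlation between hypothetical
# layperson and fact-checker trust ratings for six sources.
# All numbers are invented for demonstration, not taken from the study.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up trust ratings on a 1-5 scale: three mainstream sources score
# high for both groups, three hyperpartisan/fake sources score low for both.
layperson    = [4.2, 3.9, 3.5, 1.6, 1.3, 1.1]
fact_checker = [4.6, 4.4, 4.0, 1.4, 1.2, 1.0]

print(pearson(layperson, fact_checker))  # close to 1: strong agreement
```

As Rand notes, the agreement is driven mostly by the shared distrust of the fringe sites, which anchors both lists at the low end.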
These results, published in the Proceedings of the National Academy of Sciences, indicate that Facebook had the right intuition, but Rand noted one major flaw in the company’s proposed design: Facebook initially suggested using familiarity with a news source as a screening question. Only those familiar with a source would be allowed to assess its trustworthiness. “Our results show that this is a terrible idea,” Rand said. If you exclude the rankings of people who are unfamiliar with sites, then the hyperpartisan and fake news sites do almost as well as mainstream media sites; this is because only those who seek out such fringe sources tend to be familiar with them. “Lack of familiarity is actually a useful cue that the source is probably not reliable,” Rand said.
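The screening flaw can be shown with a toy example. The respondents and ratings below are entirely hypothetical: a fringe site is familiar mostly to its own fans, so filtering out “unfamiliar” responses, as Facebook's proposed design would, inflates the site's average trust score, while averaging over everyone keeps it low.

```python
# Hypothetical illustration of the familiarity-screening flaw.
# Each tuple: (familiar with the fringe site?, trust rating on a 1-5 scale).
# All numbers are invented for demonstration.
respondents = [
    (True, 4.5), (True, 4.0), (True, 4.2),      # fans who know the site
    (False, 1.0), (False, 1.2), (False, 1.1),   # everyone else: low trust
    (False, 1.3), (False, 0.9), (False, 1.0),
]

# The proposed filter: count only respondents familiar with the source.
screened = [rating for familiar, rating in respondents if familiar]
# The alternative: count every respondent's rating.
everyone = [rating for _, rating in respondents]

print(sum(screened) / len(screened))  # high average: only fans counted
print(sum(everyone) / len(everyone))  # low average: unfamiliarity is a signal
```

Under the screen, the fringe site looks as trustworthy as a mainstream outlet; with all respondents included, its score drops, which is the pattern Rand describes.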
At the same time, Rand noted that if this approach were implemented it could create a hurdle for high-quality publications that are either new to the market or published for a niche audience. To avoid unfairly demoting their content in news feeds, he suggests that people who are unfamiliar with outlets should be given a sample of the content on which they can base their opinions.
Rand offered two more important caveats from his work. First, survey participants must be recruited randomly. If social media companies instead allow users to rank whatever content they want, “then the system can be easily gamed,” he said. Imagine, for example, Trevor Noah staring into the camera every night and urging his viewers to vote for the credibility of The Daily Show. Second, Rand pointed out that they only surveyed people in the United States. Whether this approach would work well in other countries remains to be seen.