Advertising is the lifeblood of social media platforms, but it’s also helping to fuel the spread of the “fake news” that threatens to complicate their business models and usher in a new era of regulation.
Restricting or redirecting how that advertising works on a platform could prove to be part of a solution to the problem, according to new research from MIT Sloan economist Catherine Tucker and Occidental College’s Lesley Chiou, PhD '05. The researchers found a 75 percent reduction in the amount of fake news being shared after Facebook rolled out a new advertising system designed to intercept “fake” news articles that contain “deceptive, false, or misleading content.”
Facebook’s platform was the alleged weapon of choice for Russian operatives working to sway the U.S. presidential election through propaganda campaigns, according to a February 2018 indictment from the U.S. Justice Department. The operatives allegedly created content designed for ease of sharing and then used Facebook's marketing tools to target it at the news feeds of people with strong opinions on particular — typically polarizing — issues, hoping they’d spread the false content out to their own network or groups they belonged to.
The study looked at how Facebook groups dedicated to one particular issue — the anti-vaccination movement, which claims that vaccines are ineffective and cause autism in children — spread misinformation on social media, and whether the new Facebook system banning fake news in its advertising networks was effective in limiting its spread. The idea behind studying the anti-vaccine movement was that it was less likely to be affected by the election news cycle, making it more straightforward to measure the effect of the ban.
The researchers found that Facebook groups help perpetuate fake news in two ways: They serve as “echo chambers” where members “like” posts from other users that reinforce their views or opinions; and they act as a dissemination tool when members share posts made in the group with their own wider social networks.
“A small fraction of authors account for a large majority of posts, which reinforces the concern that social media allows an individual to reach a wide audience and share information without editorial or fact-checking input,” the study read.
The researchers collected data on anti-vaccination movement articles published to Facebook before and after the November 2016 ban. After the restrictions were put in place, sharing of false and misleading content on Facebook fell by 75 percent relative to pre-ban levels, benchmarked against rival social media platform Twitter, which made no changes to its advertising policies over the same period, the analysis found.
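Comparing the change on Facebook against the change on an unaffected platform like Twitter is, in essence, a difference-in-differences design. The short sketch below illustrates the arithmetic; the share counts are hypothetical numbers chosen for illustration, not the study's actual data.

```python
# Illustrative difference-in-differences sketch.
# All numbers are hypothetical, not taken from the study.

# Average shares per fake-news article, before and after the November 2016 ban
facebook = {"pre": 400.0, "post": 110.0}  # platform that restricted fake-news ads
twitter = {"pre": 200.0, "post": 190.0}   # comparison platform, no policy change

# Change on each platform over the same period
fb_change = facebook["post"] - facebook["pre"]  # -290.0
tw_change = twitter["post"] - twitter["pre"]    # -10.0

# Difference-in-differences: Facebook's change net of the common trend
# captured by Twitter, attributed to the advertising ban
did = fb_change - tw_change  # -280.0

# Expressed relative to Facebook's pre-ban level
effect_pct = 100 * did / facebook["pre"]  # -70.0
print(f"Estimated effect of the ban: {effect_pct:.0f}% change in shares")
```

The logic is that whatever affected both platforms equally (news cycles, seasonal attention) is differenced away, leaving an estimate of the ban's own effect.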
“Our results suggest that advertising has a large influence on the spread of false news on social media,” the report read. “Approximately 75 percent of the popularity of fake news may be attributed to advertising. The policy measure of banning advertising of fake news presents an effective way of mediating the popularity of false information online.”
Fighting fire with fire
The same advertising that elevates the influence of false articles could be harnessed in a different way, by redirecting rather than stemming its power, the authors wrote.
The effect of negative advertising promoting false news on social media could be counteracted by a more aggressive push for positive advertising designed to disseminate accurate information. But, Tucker noted, such redirection requires human judgment and is hard to do at scale.
The authors cautioned that such a solution would be far more complex than simply adjusting how advertising is sold.
“The actions of platforms such as Facebook in regulating advertising do seem to have had an effect on the volume of fake news,” Tucker said. “However, our paper also emphasizes that in just focusing on ads and fake news, we are missing the bigger picture, which is the organic spread of misinformation by users themselves.”
Tucker added: “The popularity of fake news may occur in the absence of advertising, as users share articles with others in their social network, but working to stamp out misinformation in those posts runs into its own set of problems. Trying to regulate that seems to get us into very problematic First Amendment territory.”