

As content booms, how can platforms protect kids from hateful speech?


Given that more than 500 hours of video are uploaded to YouTube every minute, the variety of content is almost endless: colorized footage of cities from the 1890s, ride-alongs with the Blue Angels, Jimi Hendrix playing backup guitar, and tutorials on how to eat sushi, breed mice, tie shoes, grow figs, carve a canoe, or curl your handlebar mustache.

And then there are the comments. From July to September of 2019, YouTube purged roughly half a billion comments that violated the company’s hate speech policy, a twofold increase over the previous quarter. That same year, YouTube introduced a setting that automatically hides toxic comments until channel owners can review them.

“We often talk about the idea of viral videos or virality in social media,” said Catherine Tucker, a marketing professor at MIT Sloan. “We were interested in the dark side of that: How viral is hate? How viral is the use of abusive language towards children?”

In a new working paper coauthored with University of Washington assistant professor Uttara Ananthakrishnan, Tucker examines the various kinds of toxic and hateful comments and how they proliferate on videos by and for children. The results demonstrate how an initial hateful comment opens the door for others to follow, a problem the researchers argue could largely be avoided if content platforms automatically flagged and concealed potentially hateful comments before reviewing them.

In their study, Tucker and Ananthakrishnan gathered roughly 110 million comments, collected in 2017, from 55,000 videos created by 200 different kids or teenagers with their own YouTube channels. They analyzed the comments first using a few thousand keywords suggestive of explicit or hateful content, then used a natural language processing model to classify hateful or toxic content. In the end, they found that one in every 20 comments contained something inappropriate, an effect that was especially pronounced on channels created by preteens with more than 1.5 million followers. On top of this, the presence of one hateful comment increased the probability that similar comments would appear.
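The paper does not publish its code, but the two-stage screen described above, a cheap keyword filter followed by a model-based toxicity score, can be sketched roughly as follows. The keyword list, the score_toxicity() stub, and the 0.8 threshold are illustrative assumptions for this sketch, not the classifier the researchers actually used.

```python
# Rough sketch of a two-stage comment screen: a cheap keyword filter first,
# then a model-based toxicity score for anything the keywords miss.
# The keyword list, score_toxicity() stub, and 0.8 threshold are placeholders,
# not the pipeline used in the study.

HATE_KEYWORDS = {"stupid", "ugly", "kill yourself"}  # tiny stand-in list


def keyword_hit(comment: str) -> bool:
    """Stage 1: flag comments containing any known hateful keyword."""
    text = comment.lower()
    return any(kw in text for kw in HATE_KEYWORDS)


def score_toxicity(comment: str) -> float:
    """Stage 2: stand-in for a trained NLP classifier returning a 0-1 score.

    A real pipeline would call a language model here; this stub just counts
    keyword matches so the example runs on its own.
    """
    hits = sum(kw in comment.lower() for kw in HATE_KEYWORDS)
    return min(1.0, hits / 2)


def is_inappropriate(comment: str, threshold: float = 0.8) -> bool:
    return keyword_hit(comment) or score_toxicity(comment) >= threshold


if __name__ == "__main__":
    comments = ["great video!", "you are so stupid"]
    print([c for c in comments if is_inappropriate(c)])  # ['you are so stupid']
```

In a pipeline like this, the cheap keyword pass mostly serves to cut down how often the more expensive model-based scorer has to run across a corpus of 110 million comments.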

“What we saw was not simply the upfront cost of one offensive comment,” Ananthakrishnan said. “When YouTube didn’t take action, then things quickly went viral as new social norms got established; once one person broke the norm, people seemed to get comfortable and the conversation turned nasty.”

The researchers highlight how salient these results are for children, who are both drawn to YouTube (roughly 30% of American and British kids say they want to make a career of creating YouTube videos) and particularly vulnerable to hateful comments. The effects of cyberbullying on children’s mental health are well documented in the medical literature, with evidence of increased rates of depression, self-harm, and suicidal ideation among its victims.

Tucker and Ananthakrishnan propose a straightforward solution to this problem: Tech platforms should err on the side of caution by acting quickly, with algorithms, to hide potentially harmful comments. This is especially important in the case of trolls, who tend to poison a video’s comments section within the first few days after it is posted; after that, their activity drops off precipitously. Once a comment has been flagged in this narrow but critical window, algorithms or manual review can determine what should and should not be published.
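As a sketch only, and assuming a three-day window and a minimal data model that the paper does not specify, the “hide first, review later” policy the researchers recommend might look something like this:

```python
# Minimal sketch of a "hide first, review later" moderation flow: a comment
# flagged within the first few days after a video goes up is hidden right away
# and queued for review, which later either restores or permanently removes it.
# The 3-day window and the Comment data model are assumptions for illustration.

from dataclasses import dataclass
from datetime import datetime, timedelta

EARLY_WINDOW = timedelta(days=3)  # assumed "narrow but critical" window


@dataclass
class Comment:
    text: str
    posted_at: datetime
    video_uploaded_at: datetime
    visible: bool = True


def moderate(comment: Comment, flagged: bool) -> None:
    """Hide a flagged comment immediately if it lands in the early window."""
    in_window = comment.posted_at - comment.video_uploaded_at <= EARLY_WINDOW
    if flagged and in_window:
        comment.visible = False  # hidden now, queued for human or algorithmic review


def review(comment: Comment, approved: bool) -> None:
    """After review, either restore the comment or keep it hidden for good."""
    comment.visible = approved
```

In this framing, the cost of a false positive is a short delay before a benign comment appears, while the cost of a false negative is the normalization effect the study documents.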


In June 2019, YouTube debuted a feature that automatically flags inappropriate comments and holds them for creators to review. The move came in the wake of controversy over sexually predatory comments posted below videos of children. The feature became the default for all channels in June 2020, though creators can turn it off on their channels.

“As YouTube did, companies should hide these comments first, and then get them under review,” Ananthakrishnan said. “Tech companies want to be in the right when it comes to censoring and removing content — they make millions of decisions like this every day — but we’re seeing the costs of policies that focus on accuracy alone at the expense of exposure.”

The researchers caution against the wholesale removal of comments on videos made by children. First, the classification of “kid-directed” content is unclear, especially when a video features an entire family or is made by adults for kids. Second, enforcing such a policy is difficult to scale. Third, creators rely on viewer engagement, and engagement often comes through comments.

It is, in the end, a complicated problem, but one that Ananthakrishnan suggests is not going away anytime soon. “Tech companies like to believe that they simply bring people together. This has been the case in a lot of contexts, but these platforms exist in a society filled with people who have their own biases, and anonymity can bring out a lot of bad things,” she said. “If the last decade was all about these companies focusing on advertising and profits, I think we’re going to see that this decade must be about how to prevent bad things from happening — the kinds of things that have always happened in society, but which are now scaled up because of the vast reach and influence of these platforms.”
