Reddit limits abusive content by giving trolls fewer places to gather

Shutting down toxic forums may be a better anti-hate strategy than targeting ‘bad actors’

Reddit mascots are displayed at the company’s headquarters in San Francisco: Shutting down forums dedicated to abusive content reduced the levels of hate on the popular online message board. Photograph: Reuters.

There are, and always have been, and probably always will be, trolls, scoundrels and reprobates on the internet.

It is a problem that has vexed multibillion-dollar corporations and the smartest computer programmers in the world. Facebook, Twitter and YouTube have all declared war on abuse and harassment, spent years training sophisticated algorithms and hired vast armies of moderators to root out hateful content.

And yet, the trolls persist. But what if a better way of combating online toxicity were right under our noses?

A new study by researchers at Emory University, Georgia Institute of Technology and the University of Michigan suggests that the most effective anti-hate tactic may be what amounts to a nuclear option: identifying and shutting down the spaces where hateful speech occurs, rather than targeting bad actors individually or in groups.


The researchers analysed 100 million posts originating on two forums on Reddit, the hugely popular online message board.

The forums, r/fatpeoplehate and r/CoonTown, were among several that Reddit administrators banned in 2015 as part of a sitewide crackdown on poisonous behaviour. (In case the names weren’t a tip-off, fatpeoplehate was devoted to photos that mocked overweight people, and CoonTown was filled with racist bile.)

The researchers generated a list of hateful terms used on the two forums and tracked the use of those terms across Reddit. They also compared the activity of users who posted hateful terms before the bans with those users’ activity after, to determine whether they had infiltrated other Reddit forums.
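The mechanics of that comparison are easy to sketch. The fragment below, in Python, shows one way to tally a user’s rate of flagged terms before and after a ban date; the term list, the post format and the June 2015 cut-off are illustrative assumptions, not the researchers’ actual data or code.

    from datetime import datetime, timezone

    # Illustrative placeholders, not the study's actual term list or ban date.
    HATEFUL_TERMS = {"slur_a", "slur_b"}
    BAN_DATE = datetime(2015, 6, 10, tzinfo=timezone.utc)

    def term_count(text):
        """Count occurrences of tracked terms in a post, case-insensitively."""
        return sum(1 for token in text.lower().split() if token in HATEFUL_TERMS)

    def usage_rates(posts):
        """posts: iterable of dicts with 'author', 'created' (a timezone-aware
        datetime) and 'body'.
        Returns {author: (terms_per_post_before, terms_per_post_after)}."""
        stats = {}
        for post in posts:
            period = 0 if post["created"] < BAN_DATE else 1  # 0 = before, 1 = after
            user = stats.setdefault(post["author"], [[0, 0], [0, 0]])
            user[period][0] += term_count(post["body"])  # flagged terms seen
            user[period][1] += 1                         # posts counted
        return {
            author: tuple(t / n if n else 0.0 for t, n in periods)
            for author, periods in stats.items()
        }

Comparing those two per-user rates across everyone who had posted in a banned forum is, in essence, the before-and-after measurement the study reports.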

Effective bans

The goal was to figure out what happened when these toxic communities were shut down. Did the amount of hateful language on Reddit decrease? Did users of hateful forums migrate to other parts of the site? Did any of them change their behaviour as a result of the bans?

The study found that, to a large extent, the bans worked. Some users who had posted offensive material on the forums that were shut down stopped using Reddit entirely.

Of those who continued to use the site, many migrated to other forums, but they did not bring significant amounts of toxic speech with them, and the forums they moved to did not become more hateful as a result of their presence.

Overall, the users who stayed on Reddit after the bans took effect decreased their use of hate speech by more than 80 per cent.

“By shutting down these echo chambers of hate, Reddit caused the people participating to either leave the site or dramatically change their linguistic behaviour,” the researchers wrote.

In an interview, two of the researchers who led the study told me that although they had only examined Reddit, their findings might be applicable to social networks like Facebook and Twitter, which tend to enforce their rules against individuals, rather than groups.

They also tend to issue bans in a defensive, case-by-case manner, often in response to user-generated reports of bad behaviour.

Proactive shutdowns

But the results of the study suggest that proactively shutting down nodes where hateful activity is concentrated may be more effective.

"Banning places where people congregate to engage in certain behaviors makes it harder for them to do so," said Eshwar Chandrasekharan, a doctoral student at Georgia Tech and the study's lead author.

Eric Gilbert, an associate professor at the University of Michigan and one of the researchers involved in the study, said that Reddit’s approach worked because it had a clear set of targets. “They didn’t ban people,” he said. “They didn’t ban words. They banned the spaces where those words were likely to be written down.”

Social networks are increasingly feeling pressure to address hateful speech, not just for the sake of users but in response to legal and political challenges. German authorities, for example, have threatened to fine social networks, including Facebook and Twitter, up to €50 million ($53 million) for failing to remove harmful content in a timely manner.

As these platforms strategise about how to take on hate speech, they would be smart to study the geography of their networks - which groups, pages and subcommunities tend to encourage this behaviour - and the effect of closing those spaces, even without a specific violation or report of abusive speech.

It might seem odd to focus on a space, rather than on a person or an act. But as the Reddit example shows, the broadest approach is sometimes the right one.

- (The New York Times Service)