Google launches tool to help curb online hate speech

AI software aimed at helping publishers identify abusive comments on internet

Google’s artificial-intelligence tool, Perspective, is being tested by a range of news organisations. Photograph: Josh Edelson/AFP/Getty Images

Google has launched an artificial-intelligence tool that identifies abusive comments online, helping publishers respond to growing pressure to clamp down on hate speech.

Google’s freely available software, Perspective, is being tested by a range of news organisations, including the New York Times, the Guardian and the Economist, as a way to simplify the work of the humans who review comments on their stories.

"News organisations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labour and time," said Jared Cohen, president of Jigsaw, the Google social incubator that built the tool.

“As a result, many sites have shut down comments altogether. But they tell us that isn’t the solution they want.”


Currently, the software is available to a range of publications that are part of Google’s Digital News Initiative, including the BBC, the Financial Times, Les Echos and La Stampa, and in theory to social media platforms such as YouTube, Twitter and Facebook.

The Irish Times is a recipient of funding from the Google Digital News Initiative.

“We are open to working with anyone from small developers to the biggest platforms on the internet. We all have a shared interest and benefit from healthy online discussions,” said CJ Adams, product manager at Jigsaw.

‘Toxic’ comments

Perspective helps to filter abusive comments more quickly for human review. The algorithm was trained on hundreds of thousands of user comments that had been labelled as "toxic" by human reviewers, on sites such as Wikipedia and the New York Times.

It works by scoring online comments based on how similar they are to comments tagged as “toxic” or likely to make someone leave a conversation.
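Perspective’s actual model is a proprietary machine-learned classifier, but the idea of scoring a comment by its similarity to human-labelled “toxic” examples can be illustrated with a toy nearest-neighbour sketch. Everything below is invented for illustration (the training strings, the bag-of-words representation, the function names); it is not Google’s implementation.

```python
from collections import Counter
import math

# Stand-ins for human-labelled comments; the real system was trained on
# hundreds of thousands of such labels from sites like Wikipedia and the NYT.
LABELLED_TOXIC = [
    "you are an idiot and nobody wants you here",
    "shut up, you worthless troll",
]

def bag_of_words(text):
    """Represent a comment as lowercased word counts."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def toxicity_score(comment):
    """Score a comment by its closest match among labelled toxic examples."""
    vec = bag_of_words(comment)
    return max(cosine_similarity(vec, bag_of_words(t)) for t in LABELLED_TOXIC)
```

A moderation queue could then sort incoming comments by this score, so that human reviewers see the likeliest abuse first rather than reading everything in arrival order.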

“All of us are familiar with increased toxicity around comments in online conversations,” Mr Cohen said. “People are leaving conversations because of this, and we want to empower publications to get those people back.”

The New York Times trial resulted in reviewers being able to check twice as many comments in the same amount of time, as the algorithm helped to narrow down the pool of possibilities.

Google is not the first to attempt to curb trolling online. Earlier this month, Twitter stepped up its efforts by making tweaks to hide abuse from its users, rather than remove content from the platform completely.

Its chief executive, Jack Dorsey, tweeted at the time that Twitter was measuring its progress against abuse on a daily basis.

In May, US tech groups including Google, Facebook, Twitter and Microsoft signed a "code of conduct" with the European Commission that required them to "review the majority" of flagged hate speech within 24 hours, remove it if necessary and even develop "counter-narratives" to confront the problem. – (Copyright The Financial Times Limited 2017)