Facebook moderators’ removal of content hit by Covid-19 pandemic

Virus ‘facts’, hate speech, self-harm and child abuse policed by technology and people

Facebook removed more than seven million pieces of Covid-19 misinformation between April and June, including fake preventative measures or exaggerated cures. Photograph: Josh Edelson

Facebook removed millions of posts promoting misinformation about the coronavirus pandemic, and also saw a large increase in the number of posts it removed for breaking hate-speech rules in the second quarter of the year.

But the pandemic also left fewer content moderators available to review posts, causing a dip in the level of action taken against content that breached community guidelines in sensitive categories such as suicide and self-harm, and child exploitation.

The social media giant said it had removed more than seven million pieces of harmful Covid-19 misinformation from its platforms between April and June, taking down posts that pushed fake preventative measures or exaggerated cures that health experts deemed dangerous.

It also worked with independent fact-checkers to apply warning labels to 98 million pieces of Covid-19 misinformation on Facebook.


The company’s technology picked up the majority of the 22.5 million pieces of content it classed as hate speech on Facebook, detecting 95 per cent before users reported it. That was a major increase from the 9.6 million hate-speech posts Facebook removed in the first three months of the year, a rise the company attributed to expanded automation across several languages.

Facebook also removed 23 banned organisations, more than half of which supported white supremacy.

Community standards

On Instagram, more than 808,000 pieces of content were removed on the grounds of hate speech, with the company’s technology detecting 84 per cent of it.

The social media network became more reliant on technology to remove content that broke its community standards as the Covid-19 outbreak pushed moderators out of the office. Facebook’s vice-president of integrity, Guy Rosen, said people would continue to play an important role working alongside technology to help measure and tune the automation.

Despite improvements in technology, Facebook continues to rely heavily on human reviewers to deal with suicide, self-injury and child-exploitative content, and to help improve the technology that finds and removes similar or identical content.

Facebook’s content reviewers were sent home in March as the pandemic took hold, limiting the content they could monitor. That meant action was taken against fewer pieces of suicide and self-injury content on both platforms: just 275,000 posts on Instagram and 917,000 on Facebook over the three-month period.

Sexual exploitation

Action against child nudity and sexual exploitation content on Instagram also decreased, halving to under 500,000 posts. Facebook itself, however, removed 9.5 million posts it considered child nudity or exploitation, up from 8.6 million in the previous quarter.

“Despite these decreases, we prioritised and took action on the most harmful content within these categories. Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible,” Facebook said in a Newsroom post.

The company is updating its policies to account for certain types of hate speech, including content depicting blackface or stereotypes about Jewish people controlling the world.

“We don’t benefit from hate, we don’t want it on our platforms,” said Mr Rosen.

Facebook also took more action against terrorism content, bullying and harassment, and graphic sexual and violent content on both Facebook and Instagram.

The social media network said it would continue to fight co-ordinated inauthentic behaviour, voter suppression and misinformation in the run-up to the US presidential election in November.

Facebook also pledged to undergo an independent, third-party audit of its content-moderation systems, starting in 2021.

Ciara O'Brien

Ciara O'Brien is an Irish Times business and technology journalist