Facebook removed 837 million spam posts, disabled 583 million fake accounts and took down 21 million pieces of content containing pornography or adult nudity that violated its community standards in the first quarter of 2018.
The social media platform revealed the figures in its updated transparency report, which for the first time detailed how much content it removed from Facebook and the types of content it was taking down.
"It's important to stress that this is very much a work in progress and we will likely change our methodology as we learn more about what's important, and what works," said Guy Rosen, Facebook's vice-president of product management.
Violence
The report covers the period from October 2017 to March 2018 and deals with content removed for graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.
The platform also revealed how much content its automated systems were picking up and how much was reported by users. For example, spam was almost completely dealt with by Facebook’s own systems. It said fake accounts make up around 3 to 4 per cent of active accounts.
In some instances, Facebook said, it may cover graphic content with a warning that requires people to click to uncover it, to prevent it from being viewed accidentally by underage users. According to the report, this step is taken where graphic content is being used to raise awareness or condemn violence and so does not go against the company’s policies.
Facebook said that, for every 10,000 content views, an average of 22-27 contained graphic violence, up from 16-19 in the previous quarter, an increase it attributed to the growing volume of graphic content being shared on Facebook. Some 3.4 million pieces of content were either removed or labelled with a warning during the period covered by the report, with Facebook’s improved detection systems picking up 85.6 per cent of the content it subsequently took action on.
Systems
Only 38 per cent of the hate speech Facebook took action on was flagged by the company’s own systems, with 2.5 million pieces of content removed.
“For serious issues like graphic violence and hate speech, our technology still doesn’t work that well and so it needs to be checked by our review teams,” Mr Rosen said. “It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important. For example, artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.”
Of the 21 million pieces of content containing adult nudity or sexual activity that were removed, almost 96 per cent was flagged by Facebook’s own systems.
The company also published details on requests for information on users submitted by governments. A total of 52 requests from Irish authorities were received in the second half of 2017, with two classed as emergency requests. Facebook said it provided some data in 77 per cent of cases. Data was requested on 87 user accounts.
In total, more than 82,300 requests were received from authorities around the world, with Facebook providing data in almost 75 per cent of cases. Almost 31,000 requests came from the United States.