Brussels to act against social media firms over terrorist content

European Commission decides to abandon voluntary approach to removal of posts, clips

The draft legislation is likely to impose a limit of one hour for platforms to delete material flagged as terrorist content. Image: Getty

Brussels plans to force companies including Facebook, YouTube and Twitter to identify and delete online terrorist propaganda and extremist violence or face the threat of fines.

The European Commission has decided to abandon its voluntary approach to getting big internet platforms to remove terror-related videos, posts and audio clips from their websites, in favour of tougher draft regulation due to be published next month.

Julian King, the EU’s commissioner for security, said that Brussels had “not seen enough progress” on the removal of terrorist material by technology companies and would “take stronger action in order to better protect our citizens”.

“We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon,” said Mr King.


Although details of the regulation are still being drawn up inside the commission, a senior EU official said the draft legislation was likely to impose a limit of one hour for platforms to delete material flagged as terrorist content by police and law enforcement bodies.

The proposed regulation would mark the first time the EU has explicitly targeted tech companies’ handling of illegal content. Up to now, Brussels has favoured self-regulation for tech platforms, which are not considered legally responsible for material on their websites.

In March, the commission toughened up its voluntary guidelines, encouraging the removal within one hour of material that incites terrorist violence or could radicalise users. Brussels promised to review progress within three months and reserved the right to propose legislation.

Mr King said the draft regulation – which would need to be approved by the European Parliament and a majority of EU member states to come into force – would help to create legal certainty for platforms and would apply to all websites, regardless of their size.

“The difference in size and resources means platforms have differing capabilities to act against terrorist content, and their policies for doing so are not always transparent. All this leads to such content continuing to proliferate across the internet, reappearing once deleted and spreading from platform to platform,” said Mr King.


Brussels’ crackdown on extremist activity comes in the wake of high-profile terror attacks in London, Paris and Berlin over the past two years. But the move to draw up legislation has been contested inside parts of the commission, where some officials believe self-regulation has been a success on the biggest platforms, which are those most used by terrorist groups.

Google said more than 90 per cent of the terrorist material removed from YouTube was flagged automatically, with half of the videos having fewer than 10 views. Facebook said it had removed the vast majority of 1.9 million examples of Isis and al-Qaeda content that was detected on the site in the first three months of this year.

One EU official said the commission’s push for an EU-wide law targeting terrorist content reflected concern that European governments would take unilateral action. Germany this year began enforcing a high-profile “hate speech” law that targets anything from fake news to racist content; companies must remove potentially illegal material within 24 hours or face fines of up to €50 million.

The EU still opts for self-regulation by platforms in more subjective areas such as hate speech and fake news.

– Copyright The Financial Times Limited 2018