Technology

Riots, online safety and rights prove awkward bedfellows

Legislators are struggling to balance protection on the one hand with the need to avoid disproportionate restriction on the other

There is abundant evidence that online platforms and messaging apps are used for igniting and organising riots and focused destructive acts. Photograph: Christopher Furlong/Getty Images

Online safety legislation has popped back into headlines everywhere after a spate of destructive riots staged by far-right protesters, particularly in the UK.

Recent elections in the EU and the UK, and looming US elections in November, have also sparked an ongoing torrent of abuse, falsehoods and disinformation across social media platforms – sometimes ring-led by the billionaire owners of said platforms.

“Jerks drive clicks”, as academic and entrepreneur Vivek Wadhwa writes in an opinion piece this week in Fortune. He’s referring to the online actions of certain aggressively posturing tech CEOs, and how outrageous online posts too often get picked up and amplified by a media reliant on online traffic to drive advertising income.

But the observation is as relevant to the broader online world in which abuse, hatred, disinformation and image and video manipulations can quickly spread.


It’s widely recognised that social media platforms in particular remain a wild west. Years of study offer abundant evidence that online platforms and messaging apps are used for spreading hatred against individuals and groups – vulnerable immigrants and asylum seekers in the recent violent protests – and for igniting and organising riots and focused destructive acts, such as arson.

It’s also well documented that online abuse translates into reprehensible real-world incidents. And for at least a decade, particularly since the Facebook/Cambridge Analytica scandal, researchers and investigative journalists have gradually untangled how platforms can be manipulated for data gathering and highly targeted political campaigning and advertising that can potentially sway elections.

In the current political climate, voters have clearly signalled that they want preventive solutions, and that they expect the instigators of these damaging and disruptive acts to be held liable for their online and offline actions. So, politicians have been spurred into looking more closely at ways of controlling the online world, in order to limit its more worrisome effects on the real world.

There’s little dissent internationally on the question of whether platforms need better oversight and content control. The problem is how this is to be done, and who has the responsibility to do it.

Add to that, the daunting consideration of whether any single state, country or region can impose controls that realistically manage amorphous, borderless platforms operating across dozens of international jurisdictions, each with its own societal and legal norms. A goal that appears simple – stop the abuse! – is actually deeply challenging.

Many countries, including Ireland and Britain, have some degree of online safety legislation. The EU has imposed controls, too, as have some US states, though there’s no US federal legislation as yet.

Other countries, such as Sri Lanka and Malaysia, have accelerated efforts in recent weeks to bring in their own online safety acts, probably driven by the spectre of the appalling UK riots.

But there are double-edged challenges with such legislation, everywhere. These laws and proposals must provide adequate protections without imposing disproportionate restrictions that threaten important civil and human rights, such as variously interpreted speech and dissent rights.

Nor can they violate key privacy and data protections.

Recent demands – in the UK, the US and Ireland – that such laws be made stronger and more effective inevitably fail to grapple realistically with how this might be done.

Already, the hard question of “how” has led many jurisdictions – including the EU and Ireland – to do the easy work of legislatively stating “down with this sort of thing” but failing to provide much detail on the specifics of what and how.

The EU’s powerful Digital Services Act, which imposes aspirational but too often undefined regulation on the big platforms, remains vague on such conundrums (as does the EU’s recent landmark AI regulation). So too does Coimisiún na Meán, which is the official Irish national regulator for the DSA. Because so many of the big technology companies and platforms are based in Ireland, it’s also effectively the principal European regulator.

Coimisiún na Meán is also implementing an online safety code in its role as enforcer of part of the Online Safety and Media Regulation Act 2022. The specifics of what constitutes a violation, and the “how” of managing one, aren’t yet clear.

Then too there’s the threat that unsound legislation, made in the heat of political debate and posturing, will be challenged and thrown out, affecting prosecutions and punishments.

A warning sign comes from California, which tends to lead the US in legislating around digital rights and protections. Last week a federal court blocked a key part of its strong online safety Bill for children. The court objected to the provision requiring businesses to “opine on and mitigate the risk that children may be exposed to harmful or potentially harmful materials online”, which judges said violated First Amendment constitutional rights to free speech.

The decision might irrevocably impair the remainder of the Bill. It may also hobble the proposed federal Kids Online Safety Act, recently passed in the US Senate.

And it flags just how difficult it is everywhere to try to create or strengthen online safety legislation, which will have to find ways to balance but not cripple important, equally valid competing rights and protections.