
Regulating social media is an impossible job but someone has got to at least try

Coimisiún na Meán’s online safety code is as much about what it doesn’t do as what it does

The regulator is seeking ‘a step-change in platform behaviour’, says online safety commissioner Niamh Hodnett. Photograph: Niall Carson/PA Wire

The era of social media self-regulation is over. That, at least, is the message from both Taoiseach Simon Harris and Coimisiún na Meán’s online safety commissioner, Niamh Hodnett, whose regulator has the ambition of imposing some semblance of moral obligation and compliance on the internet’s notorious free-for-all.

It’s an impossible job, but someone has got to at least try to do it.

That much of the now-finalised online safety code is about protecting children in a world that exposes them to harm on an almost routine basis lends the code a better-late-than-never dimension.


With an election nearing, it is a low-risk strategy for Harris to personally associate himself with a bid to do something about harmful and illegal online content, even if that bid faces immense hurdles – not least the likely recalcitrance of some of the nine online platforms covered by the code and the openly rogue behaviour of one, Elon Musk’s X.


Coimisiún na Meán has been careful to outline what the code doesn’t do. Chief among the omissions is anything to tackle the urgent problem of toxic algorithms – “recommender systems” that can send vulnerable people, such as those with eating disorders, down damaging online rabbit-holes.

They may be addressed eventually under the Digital Services Act, with the European Commission recently opening up investigations into those used by TikTok and Meta. But anyone worried about their effects will have to wait.

The code doesn’t mandate specific age assurance or age verification methods for situations where the new obligations apply. It has, however, suggested various techniques, including two dubious ones: the uploading of documentary evidence and/or live selfies, in the hope that these will limit child access to pornography and “gratuitous violence” online.

Information obtained as part of this process cannot be used to profile users for advertising purposes, it stresses. But this element of the code is controversial. The regulator is asking users to place their trust in platforms that no one has much reason to trust.

Coimisiún na Meán is not a content moderator, nor an appeals body for moderation decisions made by platforms. But it can accept complaints about their systems and processes. It will track particular issues and it will not be afraid to sanction, it says.

One of the difficulties here is fatigue: our fatigue. Users are so jaded by platforms’ see-no-evil complaints mechanisms that many have given up reporting hate speech and other harmful content. Digital services commissioner John Evans hopes people will try to get past this. “There is value in reporting,” he says. If nothing else, it triggers platforms’ legal obligations.

The regulator is seeking “a step-change in platform behaviour”, says Hodnett. The effort involved is considerable. But amid an explosion of ever more sophisticated artificial intelligence (AI) tools, the lag between technological and cultural change and the starting gun on regulators’ race to catch up is grimly clear.