Opinion

Taoiseach, here’s how to take action against extreme material online

There are powerful tools to counter the amplification of extreme material. We just need to use them

Social media algorithms shape the world we and our children see through our social feeds. Photograph: Kenneth Cheung/iStock

Social media promised to bring us closer together. Instead, it pushes videos of self-harm and suicide at our kids, and amplifies hysteria and division in our society.

What should be done? The Taoiseach’s assertive position on digital platforms, and a week of violence and rioting in several UK cities, make this question timely. Simon Harris is signalling that he is serious and will personally lead action.

His leadership is welcome, and should be focused on the key issue: this is a data problem, not a speech problem.

Niches of extreme opinion are a constant of human history. What is different today is that extreme niche opinions are artificially boosted for profit by algorithms on digital platforms. Without this artificial algorithmic amplification, extreme material would be lost in the deluge of things posted by other people in that same instant. It would be unseen except by a tiny niche.

Algorithms shape the world we and our children see through our social feeds. They learn our intimate tastes, and then feed us a personalised diet of sensationalism to keep us scrolling. Hate and hysteria are amplified because they are highly engaging, and the more time we spend scrolling, the more revenue platforms take in from selling ads in our feeds. The same logic pushes videos normalising self-harm and suicide into children’s feeds on TikTok, and pushes extreme hatred of women into young boys’ feeds on YouTube.

Each person receives the perfect drop of poison for their individual ear, playing upon their worst instincts and driving them to extreme opinions. Meta’s secret internal research, leaked by whistleblower Frances Haugen, confirmed that its algorithm artificially pushes political extremes: if a person in the US followed only verified conservative news accounts, they were soon recommended extreme conspiracy content. This polarising force – and the capture of ad revenues that once sustained journalism – has derailed our political dialogue.

We have powerful tools to fix this problem. Recommender algorithms need intimate data about users to operate. But intimate data, including data that may reveal a person’s political views, enjoy particularly strong legal protection in the GDPR. Before feeding these data to their recommender algorithm, digital platforms are required to pass a very strict test: a person must be warned about the consequences, asked to switch the system on, and then separately be asked to confirm that this is really what they want to do.

Digital platforms do not do this, which means they are already in breach of EU law, and their recommender systems must immediately be switched off. Enforcing this would transform our online spaces at a stroke. With users back in control, we can still have all the cat videos, celebrity gossip and arcane memes that make the internet wonderful. It is commonly held that Twitter was at its best before it introduced its algorithmic timeline in 2016. Ditto for other platforms of the same era. Anyone who thinks otherwise will be free to switch the algorithm on.

This data-focused approach avoids intrusion upon freedom of expression. It limits not speech but artificial amplification. It also has the virtue of practicality, which content moderation lacks. A leaked document from inside Meta makes this plain: “we are never going to remove everything harmful from a communications medium used by so many, but we can at least ... stop magnifying harmful content by giving it unnatural distribution”.

If the Taoiseach is to succeed, he must focus his energies on what Meta calls “unnatural distribution”. There is much to be done. Some work has started, though it is not yet clear whether it has been useful. Last September, the Government published a scoping paper for the National Counter Disinformation Strategy (to which the author, among many others, contributed) that briefly acknowledges the role of algorithms. But the draft strategy itself contains no concrete measure to tackle the data problem. This must be fixed if the Government’s strategy is to have any impact. Arguments to the contrary from the tech platforms and their lobbyists should not be heeded.

Most importantly, the State should support the full enforcement of the GDPR by the Data Protection Commission (DPC) against recommender systems. Once the platforms have switched these dangerous systems off, the DPC must then carefully supervise how they ask users to switch them back on. Digital platforms have a history of using velveteen words couched in cute design to mask data chicanery.

Last month, New York state introduced a law to ban recommender algorithms in children’s social feeds. The same month, the US Kids Online Health and Safety Taskforce recommended that recommender algorithms be switched off by default for all children. In Europe, the GDPR already provides this. We only need to enforce it. Our responsibility to act is all the greater because almost all Big Tech EU headquarters are based here in Ireland. What we do or fail to do will affect every person in Europe.

Common sense dictates that digital platforms should not build intimate profiles about us or our children in order to manipulate us for profit by artificially amplifying suicide, hysteria and disinformation in our feeds. We the people – not Big Tech’s revenue-optimising algorithms – should decide what we see online, and what we choose to share with our friends. Controlling data, not speech, is the solution.

Dr Johnny Ryan is Director of Enforce at the Irish Council for Civil Liberties