This Halloween I’m dressing as someone who agrees with Rishi Sunak – my most frightening costume yet. The British prime minister has warned that the dangers posed by artificial intelligence (AI) must be tackled “head on” and definitely not with “heads in the sand”. Unless a rogue robot has dug a hole on the beach, that sounds about right to me.
Granted, the AI dangers weighing on my mind don’t precisely match the ones at the top of Sunak’s risk list. He’s worried that AI will be used to build chemical and biological weapons, leading to the destruction of humanity, whereas I’m more immediately concerned about generative AI being used to strip all joy from language and mangle it into untruths, ultimately leading to much the same outcome.
“Cheap sludge” is how Storyful and Kinzen founder Mark Little has described the potential onslaught of AI-regurgitated content.
Although even isolated instances of deepfakes are capable of causing harm, Little’s “big fear”, he told a European Broadcasting Union event in June, is the sheer wearying scale of AI’s contribution to the discourse: “We’re going to face a wave of cheap sludge generated by these bots. It is going to flood the information environment, it is going to cheapen everything, including us.”
The “battle line” he highlighted was the one where “everyone wakes up in a year’s time and goes ‘I’m just so exhausted, I don’t know what’s true any more’” – a phenomenon dubbed “the liar’s dividend” by digital law professors Danielle Citron and Robert Chesney.
For many people it won’t have taken a year: the misinformation and disinformation – misinformation’s deliberate, malicious cousin – swirling around the Israel-Hamas war has exacerbated media mistrust, heightening the stakes for those reporting on the conflict.
Sunak’s big AI safety summit arrives at a fraught and critical time. The venue is Bletchley Park, the secret wartime code-breaking headquarters turned compelling visitor attraction that once housed Colossus, the world’s first programmable digital electronic computer.
Lessons offered by history may vary. Although Bletchley Park is held up as an example of ingenuity and excellence, after the war it was the US that did most of the running in the computer revolution, meaning it is possible to look upon these early British feats and dwell instead on lost momentum and what might have been.
Perhaps sensing a repeat, UK companies using AI in admirable, life-improving ways despair at the perceived doom-mongering tone in the lead-up to the summit, to which many have not been invited.
The suspicion must be, of course, that it is dismally easier for Sunak to say he wants to mitigate the risk of human extinction from AI than it is for him to take action to mitigate the risk of human extinction from, er, humans. And when it comes to actually regulating the AI-quake it seems wiser to put greater faith in the EU’s forthcoming AI Act than the declarations of a soon-to-be-outgoing UK prime minister desperate for global influence.
Still, I’m happy for political leaders to be jumpy about AI’s impact on warfare, cybercrime, child sexual abuse, labour market disruption and the trust-eroding proliferation of fakery – better that they are spooked than complacent.
I’m glad that Taoiseach Leo Varadkar has stressed the need for increased investment in digitalisation in light of AI advances. Perhaps the health service would be a good place to start. It sounds encouraging too that the Government’s AI advisory council has received an “overwhelming” number of applications from people keen to sit on it, according to Dara Calleary, the summit-bound Minister of State with responsibility for digital transformation.
Caution and enthusiasm are not mutually exclusive. Regulation won’t render the benefits of AI illusory. What it will do is make reckless power-grabs by Big Tech – including Microsoft-backed OpenAI – that bit trickier to pull off.
For journalists covering the ascent of AI there is a balance to be struck. It shouldn’t be the role of the media to stoke up hyperbole. But the end-could-be-nigh claims, some of which emanate from experts within the tech industry itself, can’t be dismissed simply because they sound like science fiction.
Meanwhile, we’ve got our own problems. Large language models such as OpenAI’s ChatGPT and Google’s Bard, alongside a panoply of AI image generators, have pushed the wider cultural industries into a fight for which they remain unprepared.
“Co-pilot” is the jargon used by AI companies to assure us that robots, rather than coming for our jobs, will merely sit next to us in the cockpit. Evangelists favour the concept so much that when Microsoft unveiled an AI assistant last month it was called Copilot. Sadly, for anyone thinking of launching an AI tool called Captain as a dystopian joke, IBM already has a “decision management solution” by that name.
Selfishly I’d prefer it if writing wasn’t redefined as editing slabs of text authored by a dubiously programmed chatbot – a fate that seems implied by both the “co-pilot” buzzword and vague promises to keep a “human in the loop”.
When I hear of Silicon Valley companies hiring poets and other creative writers to help train their AI tools, half of me is relieved that at least this time they’re not stealing literature without permission. The other half of me recoils at the thought of the next-generation inspirational quotes that will soon be masquerading as priceless wisdom.
Honestly, if it turns out that we’re hurtling headlong towards an all-out AI apocalypse I’d prefer not to waste a single second ingesting the sort of deadening auto-assembly of words that makes those corporate Facebook posts someone used to write for Mark Zuckerberg seem like the apex of English literature.
Profit-chasing, sludge-loving employers high on cheap AI will try to convince us that AI is here to save us from drudgery. This is more nonsense. It’s nonsense because one person’s soulless drudgery is the next person’s essence of humanity – and skill. But mostly it’s nonsense for another, rarely mentioned reason: technology’s habitual capacity to reinvent soulless drudgery along with everything else.