We should all be worried by deepfake technology

AI firms are rapidly improving their video quality with no consideration for its impact on society

OpenAI's Sora has proven to be an enormous hit and it’s easy to see why. Photograph: Gabby Jones/Bloomberg

It’s backstage of a wrestling arena and Michael Collins, with Wolfe Tone by his side, cuts a passionate promo against the British. Elsewhere, Tupac Shakur and Biggie Smalls enter a ring and stare each other down.

These are just two examples of viral clips generated in recent weeks using OpenAI’s Sora or similar video-generation artificial intelligence tools. Sora has proven to be an enormous hit and it’s easy to see why.

These videos, while still not exactly accurate representations of the people involved, have far less of the “uncanny valley” quality of previous deepfakes. In essence, they have high production values.

As has so often been the case with generative AI, the words of Ian Malcolm, Jeff Goldblum’s character in Jurassic Park, come to mind. These boffins were so preoccupied with whether they could that they didn’t stop to think whether they should.


Naturally, far more disturbing videos have been generated. On the milder end of the awful were clips of Chris Benoit, a wrestler who murdered his wife and son before taking his own life. What’s truly eerie is how accurately the videos aged him forward.

Those horrific clips were the tip of the iceberg. OpenAI had to announce it would stop allowing the creation of videos featuring Martin Luther King because, to the surprise of nobody who stopped to think for even a moment, many of these were racist.

OpenAI has since said it will enable the estates of historical figures to opt out of their likenesses being generated. This is a band-aid at best.

Bear in mind that in Ireland, and in most western democracies, you can’t defame the dead. Once someone is dead, you are rather free to say what you like about them or misrepresent them at will.

Anyone getting a good giggle at an AI Michael Collins cutting a fiery promo wouldn’t have reason to think of these implications. It’s a matter of awe before understanding, just like when the leads of Jurassic Park first see a Brachiosaurus.

An AI-generated deepfake purports to show RTÉ reporting on presidential candidate Catherine Connolly ending her campaign.

The early examples from Sora that went viral were designed to generate wonder rather than scrutiny. The friction and cost involved in creating something like that are dramatically reduced with a tool like Sora. The user simply writes a text prompt and awaits the wonderful creation.

The safety layers OpenAI has reluctantly put in don’t inspire much confidence. The issue here goes beyond misrepresenting the dead to enabling a much higher scale of deepfake than ever before.

The issue of deepfakes tends to come up every few months, largely as advances in the technology make them more convincing. A viral example in Ireland a year or so ago was laughably bad. Real video from Virgin Media was overlaid with deepfaked voices, which made Dubliner Martin King sound like an over-the-top character in Downton Abbey and Ciara Doherty, from Donegal, sound American. The nonsense being spewed by the fake voiceover fooled nobody.

With the development and proliferation of Sora, we’re now perilously close to a level of deepfake technology that can mimic real people. Moreover, these deepfakes can be produced and proliferated far faster than anything to rebut them.

Sora doesn’t even need to be misused all that often to have enormous consequences. There’s more than enough material out there for convincing deepfake portrayals of the likes of Ukrainian president Volodymyr Zelenskiy, Federal Reserve chairman Jerome Powell or the World Health Organisation’s Dr Mike Ryan to cause chaos with geopolitics, markets and healthcare provision globally.


It’s all quite scary and that is unfortunate. The idea of a tool to aid the creative process and entertain is broadly good, even if the ethics of this particular tool with regard to artists are questionable.

Moral questions unfortunately rarely survive contact with market opportunity, unless regulations are put in place to account for them. Sora is a tool to create content at scale, with more convincing imagery and audio than ever before. It sounds wondrous and the public has tasted the spectacle. It wants more.

This, of course, makes it more challenging for governments and regulators to put real guardrails in place. The issues keep piling up when it comes to accounting for the potential of generative AI.


At present, regulators are still working to provide some kind of real protection to artists for the work they create. Now an enormous new issue has landed on top of that pile. While the likes of the European Commission have been faster to act on generative AI than on previous technological shifts, even greater alacrity seems to be required.

Sora is not going to be the last of these enormous developments that go viral. Anyone who knows what that will be is likely going to be very rich in short order. With Sora and whatever comes next, we ought to remember that as we keep building wonder upon wonder, we really ought not to be surprised when chaos follows.