The first study of wartime “deepfake” videos has found the fake content undermined viewers’ trust in conflict footage to the point that they became sceptical of all footage coming from war zones.
The study, from researchers at University College Cork (UCC), is also the first of its kind to find evidence of online conspiracy theories which incorporate deepfakes.
Deepfakes are artificially manipulated audio-visual material. Most deepfake videos involve the production of a fake “face”, constructed by Artificial Intelligence, that is merged with an authentic video, in order to create a video of an event that never really took place. Although fake, they can look convincing and are often produced to imitate or mimic an individual.
The study, titled A New Type of Weapon in the Propaganda War, analysed close to 5,000 tweets on X (formerly Twitter) in the first seven months of 2022 to explore how people react to deepfake content online.
The Russia-Ukraine war is presented as the first real-life example of deepfakes being used in warfare.
The researchers highlight examples of deepfake videos from this war, including video-game footage circulated as evidence of the urban-myth fighter pilot “The Ghost of Kyiv”, and a deepfake showing Russian president Vladimir Putin announcing peace with Ukraine.
The study found deepfakes often undermined users’ trust in the footage they were receiving from the conflict, to the point where they lost trust in any footage they viewed.
As well as the threat posed by the fake content itself, researchers found genuine media content was being labelled as deepfake.
The study showed that a lack of social media literacy led to significant misunderstanding of what constitutes a deepfake; however, it also demonstrated that efforts to raise awareness around deepfakes may undermine trust in legitimate videos.
Therefore, the study asserts, news media and governmental agencies need to weigh the benefits of educational deepfakes and pre-bunking against the risks of undermining truth.
John Twomey, UCC researcher, said much of the misinformation analysed in the study “surprisingly came from the labelling of real media as deepfakes”.
“Novel findings about deepfake scepticism also emerged, including a connection between deepfakes fuelling conspiratorial beliefs and unhealthy scepticism,” he said.
“The evidence in this study shows that efforts to raise awareness around deepfakes may undermine our trust in legitimate videos.
“With the prevalence of deepfakes online, this will cause increasing challenges for news media companies who should be careful in how they label suspected deepfakes in case they cause suspicion around real media.”
Mr Twomey added: “News coverage of deepfakes needs to focus on educating people on what deepfakes are, what their potential is, and both what their current capabilities are and how they will evolve in the coming years”.
Dr Conor Linehan, from UCC’s School of Applied Psychology, said researchers “have long feared that deepfakes have the potential to undermine truth”.
“Deepfake videos could undermine what we know to be true when fake videos are believed to be authentic and vice versa,” he said.
This study is part of broader work by UCC’s School of Applied Psychology examining the psychological impact of deepfakes. - PA