“In a stunning move, Canada has declared war on the US”, says a blonde American news anchor, in a video which has spread across social media from TikTok to X.
Looking straight into the camera, the anchor continues, “Let’s go to Joe Braxton, who’s live at the border.”
But those who make it to the seven-second mark of the video stand the best chance of discovering the truth.
“I am currently at the border, but there is no war”, says the reporter before revealing, “Mum, Dad, I know this may look real, but it’s all AI.”
Although the anchors in these clips appear to display the same enthusiasm, energy and diction as many authentic newsreaders, they are generated by artificial intelligence (AI).
Many of these videos are created with Veo 3, Google’s AI video generation software, which allows users to create advanced eight-second videos, syncing audio and video seamlessly.
Using this technology, users are prompting the software to make fake news anchors say outlandish things.
How can you spot that these videos are fake?
A number of pointers can help online users decipher whether a video with a legitimate-looking TV anchor is real or not.
One tell-tale clue is that in these videos, many of the “reporters” who appear to be out in the field are holding the same microphone, labelled with the generic term “NEWS”.
In reality, although many TV channels have the term “news” somewhere in their name (for instance, BBC News, Fox News or Euronews), no major channels are just called “News”.
In other cases, the logos displayed on presenters’ mics, notebooks, clothing, as well as in the background and on screen, are gibberish.
AI struggles to produce legible text because it primarily learns visual patterns rather than the semantic meaning of words, so it frequently generates illegible strings of characters.
This happens because AI works on a prompt basis: if a user’s prompt does not specify which words should appear in the video, the model will invent its own text.
Deepfake news anchors used by states
An increasing number of authentic TV channels have been experimenting with AI newsreaders in recent years, either through fully AI-generated presenters or by asking real people to give sign-off authorisation for their image or voice.
In October, a Polish radio station sparked controversy after dismissing its journalists and relaunching with AI “presenters”.
However, state actors have also been using AI anchors to peddle propaganda.
For instance, in a report published in 2023, the AI analysis firm Graphika revealed that a fictitious news outlet named “Wolf News” had been promoting the Chinese Communist Party’s interests through videos spread across social media, fronted by AI-generated presenters.
When AI anchors bypass repressive censorship in dictatorships
Although AI anchors can increase the spread of fake news and disinformation, in some instances, they can free journalists who live in repressive regimes from the dangers of public exposure.
In July 2024, Venezuelan President Nicolas Maduro was re-elected in a fiercely contested election, which was marred by electoral fraud, according to rights groups.
Following his re-election, Maduro — who has been in power since 2013 — further cracked down on the press, endangering journalists and media workers.
To fight back, journalists launched Operación Retuit (Operation ReTweet) in August 2024.
In a series of 15 punchy social media-style videos, a female and a male AI-generated anchor, called “Bestie” and “Buddy”, report on the political situation in Venezuela, sharing factual evidence.