AI systems are far better than people at spotting deepfake images, but when it comes to deepfake videos, humans may still have the edge. That’s the surprising twist from a new study that pits people against machines in the race to detect digital forgeries. The results suggest humans and machines will need to work together to identify and combat deepfakes going forward, psychologist Natalie Ebner and colleagues report January 7 in Cognitive Research: Principles and Implications.

Deepfakes are AI-generated images, audio and video that falsely depict what a person looks like, says or does. They have already been used to commit financial fraud, influence elections and ruin reputations, and they are becoming more convincing at an alarming rate, fooling humans and AI models alike.

To determine whether humans or machines were better at deepfake detection, Ebner and her colleagues first asked about 2,200 participants and two machine learning algorithms to rate the realness of 200 faces on a scale from 1 (fake) to 10 (real). Humans were able to spot deepfakes only at chance level, or about 50 percent of the time. But the machines performed better, with one algorithm getting the correct answer roughly 97 percent of the time and the other averaging 79 percent accuracy.
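The study's exact scoring procedure isn't spelled out here, but as a rough illustration of what "chance level" means for this kind of task, the sketch below assumes a rating above the scale's midpoint counts as a "real" call and accuracy is the fraction of calls that match the ground truth. The threshold, data and function names are illustrative assumptions, not the researchers' code; a purely random rater lands near 50 percent when half the faces are genuine.

# Minimal sketch (not the study's actual scoring code): one plausible way to
# turn 1-10 "realness" ratings into fake/real calls and a percent-correct score.
import random

def accuracy_from_ratings(ratings, labels, threshold=5.5):
    """Call a face 'real' if its rating exceeds the threshold, then
    report the fraction of calls that match the ground-truth labels."""
    calls = ["real" if r > threshold else "fake" for r in ratings]
    correct = sum(c == lab for c, lab in zip(calls, labels))
    return correct / len(labels)

# A rater who assigns ratings at random scores near 50 percent ("chance level")
# when half the faces are real and half are deepfakes.
labels = ["real"] * 100 + ["fake"] * 100
random_ratings = [random.uniform(1, 10) for _ in labels]
print(f"random guesser: {accuracy_from_ratings(random_ratings, labels):.0%}")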

Next, the researchers asked about 1,900 human participants to watch 70 short videos of a person discussing a topic and then to rate how realistic the person's face appeared. This time, humans outperformed the algorithms: participants got the right answer an average of 63 percent of the time, while the algorithms performed at around chance level.

The researchers are now taking a deeper look at both human and AI decision-making. They want to know "what is the machine using, for it to be so much better under some conditions than the human? And how is it different from how the human reasons? What are we seeing in the brain that the human is becoming aware of and picking up on?" says Ebner, of the University of Florida in Gainesville. "We're looking at all these different angles now in the human and in the machine to not just describe 'yes' or 'no' but to understand why are they coming to the yes and the no."

That knowledge, the team argues, will help humans figure out how best to collaborate with AI to navigate our deepfake-saturated future.

Aaron Brooks is a freelance writer and editor based in Traverse City, Mich.

