People are too trusting of deepfakes
Future. Bad news for humanity: not only are we bad at recognizing A.I.-generated faces; we also find them more trustworthy. With deepfakes now making their way into world affairs, we may all soon be questioning what is real and what is fake, creating an opportunity for entrepreneurial developers to build mainstream software that can distinguish real faces from fake ones.
How good are we really at differentiating between man and machine? A recent study published in the journal PNAS found that most people are terrible at it.
- In a first experiment, only 48.2% of respondents were able to accurately tell the difference between a real face and an A.I.-generated one.
- A second experiment, in which new participants were given feedback as they made their choices, only brought the accuracy rate up to 59%.
- A third experiment asked users to rate the faces on a scale of “perceived trustworthiness.” Users rated the fake faces as more trustworthy than the real ones. Yikes.
The researchers, Sophie J. Nightingale at Lancaster University in the UK and Hany Farid at the University of California, Berkeley, concluded that “synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces.”
Those conclusions are terrifying when you realize how deepfakes are already being used.
- The new president of South Korea won the support of young people by releasing dozens of A.I.-generated videos.
- A (not great) deepfake of Ukrainian president Volodymyr Zelensky surrendering to Russian forces started to go viral on Twitter before it was taken down.
- And these (really good) deepfakes of Tom Cruise could fool just about anyone.
In the next phase of online misinformation, we may find ourselves constantly having to verify the veracity of every video we see.