AI is making deepfakes a real problem
The Future. Deepfakes are not only on the rise, but they’re becoming increasingly easy to make, thanks to apps that leverage AI to do all the hard work. While social platforms have rules for dealing with purposely misleading content, the internet moves so fast that stopping the spread of deepfakes before they go viral may take cooperation among developers, platforms, and even the government.
Several new AI-powered apps allow anyone to create what Britt Paris, an assistant professor of library and information science at Rutgers University, calls “cheapfakes.”
- The apps make it possible for people who don’t have “sophisticated computational technology and fairly sophisticated computational know-how” to make deepfakes.
- The tech does so by “cloning celebrity voices, altering mouth movements to match alternative audio, and writing persuasive dialogue” (persuasive may be a stretch).
Meme or malware?
The New York Times reports that ElevenLabs, co-founded by a former Google engineer, is one of the main tools used to create cheapfakes. After users on 4chan co-opted the tech to spread “hateful messages,” ElevenLabs said it would put up new “safeguards” to try to stop bad actors from hijacking it.
It’s hard to put the genie back in the bottle, though. 4chan users said they would simply use the company’s open-source code to build their own tool — one of the unfortunate side effects of open-sourcing.
Now, ElevenLabs hopes to work with other AI developers to build a universal deepfake-detection system. With another round of heated elections coming up, that may need to happen sooner rather than later.