The Future. Hundreds of actors have found their voices copied and used without consent, sometimes even made to say racist or violent things. With AI tools requiring only 30 seconds of audio to clone someone's voice, it's easy to see how this problem can spread. While some firms, like Speechify and Resemble AI, are trying to put power back in artists' hands, an update to current copyright law may be the only thing that protects an actor's voice (or anybody's, really) from being abused.
Voice actors have a speech problem.
- Using tools like ElevenLabs, Uberduck, and FakeYou.ai, people have been making copies of famous voices and deploying them across social media.
- Actors now maintain a public spreadsheet with a growing list of names they're requesting be removed from the generative tools.
- But voice actors have little legal recourse if companies and platforms don't remove their voices: a voice itself can't be copyrighted; only recordings of it can.
- And the issue is expanding into professional abuse, with several actors pointing out that employers are using the tech to cut recording days (much like the struggle background actors are facing).
- The newly formed National Association of Voice Actors and SAG-AFTRA are working to change boilerplate contracts to protect actors against AI.
But back to the source: how successful have actors been at stopping the voice cloning in the first place? FakeYou.ai has removed the voices of anyone who asks, and Uberduck removed all user-contributed voices from its platform a couple of months ago.
But ElevenLabs, the largest voice generator in the game, has played hardball with actors, saying there's no confirmation its tech was used in specific cases (while also acknowledging that nonconsensual use is an overall issue). Instead, the company rolled out a "voice captcha" tool to authenticate users.