Researchers experiment with how to hide your voice to protect your privacy
Future. Privacy advocates and researchers want to protect your voice from fraud and from non-consensual voice and medical monitoring, especially as Siri, Alexa, and other A.I. voice assistants become commonplace and platforms such as TikTok collect users’ “voiceprints.” With “Big Voice” on track to be a $20 billion biz in the next few years, the ability to control and protect your voice may be what keeps it unique.
Quiet your data
According to Wired, Natalia Tomashenko, a researcher at Avignon University, says that privacy advocates and technologists are working on several ways to protect your voice:
- Obfuscation: The process of completely hiding who the speaker is through voice-changing technology or using A.I. to create a new voice.
- Distributed and federated learning: Software can recognize the speech being spoken on the device itself, so the data of your unique voice never leaves the device you’re speaking to.
- Anonymization: Allowing for your human voice to still flow through the system but stripping it of information that could make you recognizable — like replacing sensitive words or shifting your pitch.
- Legal protection: Europe’s General Data Protection Regulation covers voice biometrics, and in the U.S., the state of Illinois is leading the charge in litigating voice-privacy violations.
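To make the anonymization idea above concrete, here is a minimal sketch of pitch shifting in Python, assuming only NumPy. It is a toy illustration, not how production voice-anonymization systems work: it crudely shifts pitch by resampling (which also changes duration), just to show how altering pitch can make a voice sound less like its owner.

```python
import numpy as np

def shift_pitch(samples: np.ndarray, semitones: float) -> np.ndarray:
    """Crudely shift pitch by resampling; note this also shortens/lengthens
    the clip, unlike a real pitch shifter."""
    factor = 2 ** (semitones / 12)               # frequency ratio of the shift
    old_idx = np.arange(len(samples))
    new_len = int(len(samples) / factor)
    new_idx = np.linspace(0, len(samples) - 1, new_len)
    # Linear interpolation reads the waveform faster/slower, moving the pitch.
    return np.interp(new_idx, old_idx, samples)

# Demo on a synthetic 220 Hz tone standing in for a voice, sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220 * t)
disguised = shift_pitch(voice, semitones=4)      # raise pitch by 4 semitones
```

Real anonymization tools apply much more than a pitch shift (and avoid the duration change), but even this toy version moves the 220 Hz tone up to roughly 277 Hz, enough to throw off a naive voiceprint match.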
With A.I. software able to determine your “age, gender, ethnicity, socio-economic status, health conditions, and beyond” simply by analyzing your voice, your voice data falling into the wrong hands is a little like opening your wallet and sharing your passwords with a criminal all at once.