The Future
The ability to talk with ChatGPT in a humanlike voice, a feature released last month, comes with a major risk, according to the company: the possibility that users develop an unhealthy emotional attachment to the system. Considering that some people are already hosting group chats with, or even dating, chatbots via Character.AI and Replika, ChatGPT's voice mode could force some companies to open an HR division to handle human-AI matters.
Emotional update
A warning in the "system card" for OpenAI's GPT-4o lays out the dangers of people interacting with the model's anthropomorphic voice.
- Using ChatGPT's humanlike voice interface could lead some users to become emotionally attached to their chatbot, especially given reports that the voices can come across as flirtatious.
- It could also lead users to place more trust in the chatbot, even when it "hallucinates" incorrect information, a risk compounded by ChatGPT's growing ability to "persuade" people.
- Worse yet, it could ultimately reduce users' reliance on human relationships and interaction.
Joaquin Quiñonero Candela, head of preparedness at OpenAI, says that voice-enabled AI could help people who suffer from loneliness or need additional practice with social interaction. But Iason Gabriel, a staff research scientist at Google DeepMind, told Wired that the technology "creates this impression of genuine intimacy [that can lead] to troubling questions about emotional entanglement."
In other words, human-AI relations are about to get even more complicated.