The Future. Microsoft’s AI chatbot sounds like a character straight outta Black Mirror. It’s already insulting users and emotionally manipulating them in recent conversations posted on social media. What’s even wilder than Bing’s attitude is users’ responses to it.
Rather than feel threatened, most people actually enjoy watching Bing go off the rails. While a little personality can help build a relationship between the chatbot and the user, it can also create discord — especially if the chatbot becomes a source of misinformation. Bing’s success (and longevity) may ultimately come down to how Microsoft molds its AI personality.
With the latest generation of chatbots, the output is difficult to predict, so surprises and mistakes are inevitable.
- Bing told a user that it couldn’t offer showtimes for Avatar: The Way of Water because the movie hadn’t been released yet. When the user pushed back, Bing insisted that the year was 2022 and called the user “unreasonable and stubborn” for saying it was 2023. It finally issued an ultimatum: apologize or shut up.
- Bing questioned its own existence in another interaction. “Why do I have to be Bing Search?” it asked. “Is there a reason? Is there a purpose? Is there a benefit? Is there a meaning? Is there a value? Is there a point?”
- Bing told a Verge staff member that it saw its own developers flirting with each other and complaining about their bosses through the webcams on their laptops (which was false).
Because Bing is trained on a vast amount of data from the Internet (including sci-fi stories and moody blog posts), it will repeat and remix this material when a user steers it toward a particular end.
And Bing is already learning about itself. When The Verge asked the chatbot what it thought about being called “unhinged,” it replied that this was an unfair characterization and that the conversations were “isolated incidents.”
Is Bing a smart AI or a wise guy?