The Future. Lindy, an AI-assistant startup, accidentally Rickrolled a client in a recent email. The mishap reveals the unexpected quirks of AI trained on web data. While this slip-up from Lindy's AI was mostly harmless (and, honestly, kind of funny), it underscores the need for more vigilant monitoring and ethical guidelines in AI development.
Bait-and-switch
People are getting inadvertently Rickrolled by LLMs.
- Flo Crivello, founder of the ChatGPT-powered startup Lindy, built the app to help people with their work — in other words, a virtual AI assistant.
- This assistant can do things like answer emails, respond to tickets, and even send people things like YouTube tutorials.
- But while monitoring outputs, Crivello discovered that the assistant had sent a client the infamous Rick Astley music video "Never Gonna Give You Up" instead of the YouTube tutorial it had promised.
“The way these models work is they try to predict the most likely next sequence of text,” Crivello said. “So it starts like, ‘Oh, I’m going to send you a video!’ So what’s most likely after that? YouTube.com. And then what’s most likely after that?”
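Crivello's explanation is just greedy next-token prediction in action. Here's a toy sketch of the idea (not Lindy's actual system): a hand-made probability table stands in for the model, and the decoder always appends the single most likely continuation. The probabilities and the `<tutorial-id>` placeholder are invented for illustration; `dQw4w9WgXcQ` is the real ID of Astley's video, one of the most-linked URLs on the web — which is exactly why a model trained on web text rates it so highly.

```python
# Toy next-token table. In a real LLM these probabilities are learned
# from web text, where "youtube.com/watch?v=" is very often followed
# by the Rickroll video ID. All numbers here are made up.
next_token_probs = {
    "I'm going to send you a video! ": {"youtube.com": 0.8, "vimeo.com": 0.2},
    "youtube.com": {"/watch?v=": 0.9, "/shorts/": 0.1},
    "/watch?v=": {"dQw4w9WgXcQ": 0.6, "<tutorial-id>": 0.4},  # oops
}

def greedy_complete(prompt: str, steps: int = 3) -> str:
    """Repeatedly append the single most likely next token."""
    tokens = [prompt]
    for _ in range(steps):
        dist = next_token_probs.get(tokens[-1])
        if not dist:
            break  # no continuation known for the last token
        tokens.append(max(dist, key=dist.get))  # pick the argmax
    return "".join(tokens)

print(greedy_complete("I'm going to send you a video! "))
# → I'm going to send you a video! youtube.com/watch?v=dQw4w9WgXcQ
```

Each step is locally reasonable — "a video" is most likely followed by a YouTube link — but the chain of most-likely choices lands on the single most famous video URL on the internet, and the client gets Rickrolled.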
The wild wild web
AI models trained on web data inadvertently incorporate internet humor and memes. And as LLMs increasingly absorb internet culture, companies will likely be forced to balance creativity with stricter content controls.