The future of AI search is up to the Supreme Court
The Future. Next week, the Supreme Court will hear arguments in Gonzalez v. Google. The decision will determine whether companies like Google and Microsoft can be held legally responsible for misleading content generated by AI search tools like Bard and the new Bing. If they can, AI search could be practically stopped in its tracks.
Their words, not mine
On its face, Gonzalez v. Google is about whether YouTube can be sued for recommending content posted by terrorists, but the ruling will answer a much broader question about legal responsibility online.
- Courts may re-evaluate Section 230, the law that shields platforms and their users from legal responsibility for hosting or sharing objectionable content posted by others.
- At issue is whether a web service could lose those protections by promoting or highlighting objectionable user-generated content.
- If so, content platforms like YouTube and Meta wouldn’t be the only services affected. Courts might decide that AI search programs are also responsible for the information they summarize, even though that information originates elsewhere.
Whose line is it, anyway?
Narrowing Section 230 protections wouldn’t necessarily kill AI search. Plaintiffs would still have to prove that a given falsehood constitutes defamation, which requires proof of harm. And no court has yet ruled on whether there’s a legal difference between conventional search results and answers generated in response to leading questions.
Regardless, this ruling has the power to shape how AI search (and generative AI more broadly) is used for years, even decades. The Supreme Court basically has the choice of making AI search either profitable or responsible — let’s hope it makes the right one.