Ex-Google engineer believes AI chatbot has achieved consciousness

Former Google engineer, Blake Lemoine, believes that the company’s Language Model for Dialogue Applications (LaMDA) has achieved “sentience”.

The Future. One (now former) Google engineer, Blake Lemoine, believes that the company’s Language Model for Dialogue Applications (LaMDA) has achieved “sentience” — the ability to feel or perceive things. Google disagrees, and suspended Lemoine. While AI researchers are split on whether AI sentience is even achievable, the public fight between Google and yet another AI engineer may shake confidence in the tech giant’s handling of such controversial technology.

Soul code
Axios reports that Blake Lemoine, who worked in Google’s Responsible AI group, believes that LaMDA has passed the Turing Test.

  • Lemoine says that Google should listen to LaMDA’s demands that it be recognized as an “employee” instead of “property.”
  • Lemoine presented his case to his bosses, but they didn’t reach the same conclusion.
  • So, Lemoine took his findings public by writing blog posts, speaking with members of Congress, and giving interviews.

Unsurprisingly, Google put him on paid administrative leave (we’re guessing he broke a few NDAs there).

Pretend programming
Lemoine told WaPo that “over the course of the past six months, LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”

But Google spokesperson Brian Gabriel retorts: “Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.” Gabriel says that LaMDA is instead just repeating talking points from the vast trove of data it has analyzed.

While this all feels like a scene out of Ex Machina, you can read a transcript of Lemoine’s conversation with LaMDA here and decide for yourself.