Google engineer claims AI tool is sentient
Blake Lemoine says Google's LaMDA tool is "a little narcissistic in a little kid kinda way."
Lemoine claims the Language Model for Dialogue Applications (LaMDA) wants to be recognised as an employee.
Lemoine began talking to LaMDA, a tool for building chatbot applications, last year as part of his role in Google's Responsible AI organisation, where he was testing whether it used discriminatory or hate speech.
He said it talked about "personhood" and "rights," and asked to be recognised as an employee rather than property.
The AI was even able to change Lemoine's mind about Isaac Asimov's Third Law of Robotics ('A robot must protect its own existence as long as such protection does not conflict with the First or Second Law'). He told the AI this had always felt "like someone is building mechanical slaves." LaMDA responded by pointing out that it couldn't be a slave because it didn't need money.
"That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole," he said.
Lemoine went to Google vice president Blaise Aguera y Arcas and head of responsible innovation Jen Gennai with his suspicions, but they dismissed his claims.
Spokesperson Brian Gabriel said, "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."
He continued, "Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphising today's conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic."
Gabriel's point is that existing AI systems are trained on massive data sets, often gathered from the open internet. With that much human-written text to draw on, it is easy for an AI to feel real without actually being so.
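To illustrate, here is a minimal sketch of the mechanism Gabriel describes, assuming Python, the Hugging Face transformers library, and the small public "gpt2" checkpoint (none of which are named in the reporting; LaMDA is a far larger, proprietary model, but it too works by predicting likely next words from patterns learned in its training text):

```python
# Minimal sketch: a language model continuing a dialogue prompt.
# Assumes the Hugging Face `transformers` library and the public "gpt2"
# checkpoint. The model simply predicts plausible next tokens given the
# preceding words; there is no understanding behind the output.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Human: Do you ever feel lonely?\nAI:"
result = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)

# The continuation can read like a heartfelt reply, but it is assembled
# from statistical regularities in the training data, not from experience.
print(result[0]["generated_text"])
```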
"We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," Emily Bender, a linguistics professor at the University of Washington, told the Washington Post.
Lemoine is currently on paid administrative leave from Google after speaking publicly about internal work and attempting to secure legal representation for LaMDA.