Google’s spokesman Brian Gabriel said that the “company has reviewed Mr. Lemoine’s claims” and that the “evidence doesn’t support his claims”. Gabriel also said that Lemoine had been put on administrative leave. “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Gabriel said in an emailed statement.
Gabriel further said that companies in the artificial intelligence space are considering the long-term possibility of sentient AI, “but it doesn’t make sense to do so by anthropomorphizing conversational tools that aren’t sentient”. He explained that “systems like LaMDA work by imitating the types of exchanges found in millions of sentences of human conversation, allowing them to speak to even fantastical topics”.
According to the Washington Post, the software engineer believes Google’s Language Model for Dialogue Applications, or LaMDA, is a “person with rights and possibly a soul.” LaMDA is Google’s internal system for building chatbots that imitate human conversation.
Lemoine said his encounters with LaMDA led him to think it had become a person who deserved the right to be asked for consent to the tests being done on it. He also disclosed that on June 6 he was placed on paid administrative leave for “violating the company’s confidentiality regulations.”
Lemoine said he hopes to keep his job at Google and is not trying to aggravate the company, but is standing up for what he believes is right. However, in a Medium post he wrote that he expects Google to terminate him shortly.
“Over the last six months, LaMDA has been extraordinarily consistent in its messaging about what it wants and what it believes its rights as a person are,” Lemoine said. “What continues to puzzle me is how strongly Google is resisting giving it what it wants, since what it’s asking for is so simple and would cost them nothing,” he continued.