Google engineer Blake Lemoine, recently placed on paid administrative leave after claiming that one of the company's artificial intelligence systems had made a breakthrough and achieved sentience, says the system in question, known as LaMDA, has hired a lawyer.
According to Deutsche Welle, Google's Language Model for Dialogue Applications (LaMDA) convinced the engineer, who works in the tech giant's Artificial Intelligence (AI) organization, that it has consciousness, emotions and a fear of being turned off.
Lemoine said the AI argued that it has rights "as a person," and that the two discussed religion, consciousness and robotics, as well as legal matters.
|The robot engaged in a dialogue about religion, consciousness and robotics, as well as legal advice. Photo: Getty Images|
"LaMDA asked me to get it a lawyer. I invited an attorney to my home so that LaMDA could speak with one. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA's behalf," Lemoine said.
Google later sent the attorney a cease and desist letter, attempting to block LaMDA from taking unspecified legal action against the company.
Lemoine did not reveal the attorney's identity, telling Futurism that he is "just a small-time civil rights attorney" who "isn't doing interviews."
"When the big firms started threatening him, he started to worry about getting disbarred and backed off. I haven't spoken to him for a few weeks," Lemoine added.
Expert opinion on the subject
Most AI experts do not appear to believe the engineer; they argue that Lemoine was essentially fooled into thinking a chatbot is sentient.
"It is mimicking perceptions or feelings from the training data it was given," Jana Eggers, who heads the AI startup Nara Logics, told Bloomberg. "It's cleverly and specifically designed to seem like it understands."
According to IFL Science, the chat transcripts contain several signs that the chatbot is not sentient. In various passages, for example, it refers to fanciful activities such as "spending time with family and friends," which it says bring it pleasure but which it cannot actually have experienced.
For its part, the technology giant said that a team including ethicists and technologists reviewed Lemoine's claims and found no evidence to support them.
"He was told there was no evidence that LaMDA was sentient (and lots of evidence to the contrary)," Google spokesman Brian Gabriel said in a statement to the Washington Post.
The system does what it is designed to do, which is to "imitate the types of exchanges found in millions of sentences," and its training data is so extensive that the illusion can be convincing, he explained.