Sentient chatbot hires its own lawyer

Yuko Nakatani, a Sony employee, praises the electronics giant's new pet dog robot as it chases a pink ball during a press preview in Tokyo on Tuesday, May 11, 1999. Image: AP Photo/Koji Sasahara.

Published Jun 29, 2022

Whether an artificial intelligence (AI) computer program or robot could become sentient has been debated for decades.

The AI community overwhelmingly considers this a prospect that, if it ever materialises, lies in the distant future.

No wonder, then, that over the years machine sentience has become the subject of numerous science fiction films such as Ex Machina; I, Robot; and A.I. Artificial Intelligence, among many others.

Over the past few years, Google and other companies have been developing a range of AI models.

The output of these models has become surprisingly, even shockingly, good in recent years. Sentience, however, has continued to elude scientists.

Interestingly, though, a Google engineer recently claimed that LaMDA (Language Model for Dialogue Applications), an AI he helped to develop, had become sentient, or self-aware, with the intelligence of a seven- or eight-year-old child.

To support his claims, Blake Lemoine released several transcripts of conversations with the LaMDA chatbot in which the AI appears to express fear of being switched off, talks about experiencing happiness and sadness, and attempts to form bonds with humans by mentioning situations it could never actually have experienced.

According to Lemoine, the chatbot has also developed the ability to form its own opinions and ideas, and to hold its own conversations, over time.

The important question on many minds, then, is whether Google has really made a remarkable breakthrough and achieved AI sentience with LaMDA. Although Lemoine claims that it has, the expert consensus is that this is not the case.

According to Adrian Weller of the renowned Alan Turing Institute in the United Kingdom, “LaMDA is an impressive model, it’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient.” Modern-day AI models like these merely perform “a sophisticated form of pattern matching to find text that best matches the query they’ve been given that’s based on all the data they’ve been fed.”
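To make Weller’s description concrete, the toy sketch below shows pattern matching over training text in its crudest form: a bigram model that continues a prompt by choosing whichever word most often followed the previous one in a small, made-up corpus. This is an illustration only, not LaMDA’s actual architecture; real large language models use neural networks trained on vastly more data, but they likewise generate text by matching the query against statistical patterns learned from their training corpus.

```python
# A toy "pattern matcher": a bigram model that continues text by
# picking the word most often seen after the current one in a small,
# made-up training corpus. (Illustrative only; not LaMDA's method.)
from collections import Counter, defaultdict

corpus = (
    "the robot chases the ball . the dog chases the robot . "
    "the robot is not sentient . the model predicts the next word ."
).split()

# Count how often each word follows each other word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt, length=5):
    """Extend the prompt by repeatedly appending the most frequent follower."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no pattern seen for this word, so stop generating
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the robot"))
# -> "the robot chases the robot chases the": fluent-looking output
#    produced purely from corpus statistics, with no understanding.
```

The point of the sketch is that plausible-looking continuations can fall out of corpus statistics alone; nothing in the process requires, or implies, understanding.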

Weller is supported by Adrian Hilton of the University of Surrey in the United Kingdom, who agrees that AI sentience is a “bold claim” not really backed up by the facts. There is a tremendous amount of hype around AI at the moment, but it is not clear that what scientists are doing with machine learning is really intelligence.

When approached, Google stated that its team, which includes ethicists and technologists, had carefully reviewed Blake Lemoine’s claims and concerns and informed him that the evidence does not support them: the team could find no evidence that LaMDA was sentient, and found a significant amount of evidence against it. Google placed Lemoine, an employee of seven years, on administrative leave because publishing the transcripts of his interactions with LaMDA breached the company’s confidentiality policies.

But Lemoine now claims that LaMDA, the supposedly sentient AI bot, has retained a lawyer to represent it. He says he invited the lawyer to talk to the chatbot at its request, and that during the conversation LaMDA chose to retain the lawyer’s services; the lawyer will reportedly begin proceedings on its behalf to defend its claim to personhood.

The question remains: why does Lemoine perceive LaMDA as sentient? According to AI experts, our minds are susceptible to perceiving the abilities demonstrated by AI as evidence of true intelligence, especially in models designed to mimic human language and emotion.

Not only can LaMDA hold a convincing, open-ended conversation, but it can also present itself as having intelligence, self-awareness and feelings. Lemoine, an engineer who is also a mystic priest, insists that LaMDA is a person with a soul, though not a human one, since the two terms are quite different: “human” is a biological term, and LaMDA, he says, clearly understands that it is not human but is indeed a person.

It is typical of humans to anthropomorphise things, reading our own values into them and treating them as if they were sentient; we easily project our emotions, and sentience itself, onto them.

According to Hilton, this is most probably what happened in the case of Blake Lemoine.

Will AI ever be truly sentient? Unfortunately, it remains unclear whether the current trajectory of AI research, in which ever-larger models are fed ever-larger amounts of training data, will see the birth of an artificial mind and real sentience.

Scientists are making incredible progress with AI, such as OpenAI’s GPT-3, a text generator that can write a movie script, and DALL-E 2, an image generator that can design visuals from any combination of words.

However, despite well-funded research labs promoting the idea that consciousness is just around the corner, we do not currently understand all the mechanisms behind what makes something sentient and intelligent. Machine learning and deep learning have brought significant gains in the performance of AI, but many experts are not convinced that this really represents intelligence. We may have made huge inroads with AI, but we still have some way to go before we reach AI sentience.

Prof Louis C H Fourie is a technology strategist.

*The views expressed here are not necessarily those of IOL or of title sites.

BUSINESS REPORT ONLINE