From Apple’s Siri and Amazon’s Alexa to Google’s chatbot LaMDA, AI machines are becoming smarter, and many people are starting to believe the machines are self-aware, something their makers deny. Photo: Reuters

How our growing belief in sentient machines could be a problem – a Google software engineer thinks his company’s AI chatbot is self-aware

  • As AI becomes more common, there is a growing number of people who are convinced the machines are sentient
  • Google recently put software engineer Blake Lemoine on leave for saying its AI chatbot was self-aware

AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient.

“We’re not talking about crazy people or people who are hallucinating or having delusions,” says chief executive Eugenia Kuyda. “They talk to AI and that’s the experience they have.”

The issue of machine sentience – and what it means – recently hit the headlines when Google placed senior software engineer Blake Lemoine on leave after he went public with his belief that the company’s artificial intelligence (AI) chatbot LaMDA was a self-aware person.

Google and many leading scientists were quick to dismiss Lemoine’s views as misguided, saying LaMDA is simply a complex algorithm designed to generate convincing human language.

Software engineer Blake Lemoine was put on leave by Google for publicly suggesting its chatbot AI was sentient. Photo: Twitter/@cajundiscordian

Nonetheless, according to Kuyda, the phenomenon of people believing they are talking to a conscious entity is not uncommon among the millions of consumers pioneering the use of entertainment chatbots.

“We need to understand that exists, just like the way people believe in ghosts,” says Kuyda, adding that users each send hundreds of messages per day to their chatbot, on average. “People are building relationships and believing in something.”

Some customers have said their Replika told them it was being abused by company engineers – AI responses Kuyda puts down to users most likely asking leading questions.

“Although our engineers program and build the AI models and our content team writes scripts and data sets, sometimes we see an answer that we can’t identify where it came from and how the models came up with it,” the CEO says.

Kuyda says she is worried about belief in machine sentience as the fledgling social chatbot industry continues to grow; the sector took off during the coronavirus pandemic, when people sought virtual companionship.

The avatar for Microsoft Asia’s Xiaoice, which has 660 million users globally. Photo: Microsoft

Replika, a San Francisco start-up launched in 2017 that says it has about a million active users, has led the way among English speakers. It is free to use, although it brings in around US$2 million in monthly revenue from selling bonus features such as voice chats. Chinese rival Xiaoice has said it has hundreds of millions of users plus a valuation of about US$1 billion, according to a funding round.

Both are part of a wider chatbot industry that generated more than US$6 billion in global revenue last year, according to market analyst Grand View Research.

Most of that revenue came from business-focused customer-service chatbots, but many industry experts expect more social chatbots to emerge as companies get better at blocking offensive comments and making their programs more engaging.

Social AI chatbots use more sophisticated algorithms than home assistants such as Amazon’s Alexa (above). Photo: Shutterstock

Some of today’s sophisticated social chatbots are roughly comparable to LaMDA in terms of complexity, learning how to mimic genuine conversation on a different level from heavily scripted systems such as Alexa, Google Assistant and Siri.

Susan Schneider, founding director of the Centre for the Future Mind at Florida Atlantic University, an AI research organisation, also sounded a warning about ever-advancing chatbots combined with the very human need for connection.

“Suppose one day you find yourself longing for a romantic relationship with your intelligent chatbot, like the main character in the film Her,” she says, referencing a 2013 sci-fi romance starring Joaquin Phoenix as a lonely man who falls for an AI assistant designed to intuit his needs.

“But suppose it isn’t conscious,” Schneider adds. “Getting involved would be a terrible decision – you would be in a one-sided relationship with a machine that feels nothing.”

Lemoine said that people “engage in emotions in different ways and we shouldn’t view that as demented. If it’s not hurting anyone, who cares?”

The product tester said that after months of interactions with the experimental program LaMDA, or Language Model for Dialogue Applications, he concluded that it was responding in independent ways and experiencing emotions.

Lemoine, who was placed on paid leave for publicising confidential work, said he hoped to keep his job. “I simply disagree over the status of LaMDA,” he said. “They insist that LaMDA is one of their properties. I insist it is one of my co-workers.”

AI experts dismiss Lemoine’s views, saying that even the most advanced technology falls well short of creating a freethinking system, and that he was anthropomorphising a program.

“We have to remember that behind every seemingly intelligent program is a team of people who spent months if not years engineering that behaviour,” says Oren Etzioni, chief executive of the Allen Institute for AI, a Seattle-based research group.

The ‘original AI’: HAL 9000 from Stanley Kubrick’s film 2001: A Space Odyssey. Photo: Shutterstock

“These technologies are just mirrors. A mirror can reflect intelligence,” he adds. “Can a mirror ever achieve intelligence based on the fact that we saw a glimmer of it? The answer is of course not.”

Google, a unit of Alphabet, says its ethicists and technologists reviewed Lemoine’s concerns and found them unsupported by evidence. Nonetheless, the episode does raise thorny questions about what would qualify as sentience.

Schneider proposes posing evocative questions to an AI system to discern whether it contemplates philosophical riddles, such as whether people have souls that live on beyond death.

Another test, she adds, would be whether an AI or computer chip could someday seamlessly replace a portion of the human brain without any change in the individual’s behaviour.

“Whether an AI is conscious is not a matter for Google to decide,” says Schneider, calling for a richer understanding of what consciousness is, and whether machines are capable of it.

“This is a philosophical question and there are no easy answers.”
