Google debate over sentient AI overshadows more pressing issues like prejudice and exploitation, researchers say
- The debate recently ignited by a Google engineer distracts from issues like racial bias, which the tech giant’s AI has struggled with before, researchers say
- Google suspended the software engineer for disclosing information about an AI system he said was sentient, a claim the broader AI community disputes

The engineer, Blake Lemoine, said he believed that Google’s AI chatbot was capable of expressing human emotion, and that the company would need to address the resulting ethical ramifications. Google put him on leave for sharing confidential information and said his concerns had no basis in fact – a view widely held in the AI community. What’s more important, researchers say, is addressing issues like whether AI can engender real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the technology’s development.
Lemoine’s stance may also make it easier for tech companies to abdicate responsibility for AI-driven decisions, said Emily Bender, a professor of computational linguistics at the University of Washington. “Lots of effort has been put into this sideshow,” she said. “The problem is, the more this technology gets sold as artificial intelligence – let alone something sentient – the more people are willing to go along with AI systems” that can cause real-world harm.
Bender pointed to examples of AI used in job hiring and in grading students, which can carry embedded prejudice depending on the data sets used to train it. If the focus is on the system’s apparent sentience, Bender said, it distances the AI’s creators from direct responsibility for any flaws or biases in the programs.
The Washington Post on Saturday ran an interview with Lemoine, who conversed with an AI system called LaMDA, or Language Model for Dialogue Applications, a framework that Google uses to build specialised chatbots. The system has been trained on trillions of words from the internet in order to mimic human conversation. In his conversations with the chatbot, Lemoine said he concluded that the AI was a sentient being that should have its own rights. He said the feeling was not scientific, but religious: “Who am I to tell God where he can and can’t put souls?” he said on Twitter.
Employees at Alphabet Inc’s Google were largely silent in internal channels besides Memegen, where some shared a few bland memes, according to a person familiar with the matter. But throughout the weekend and on Monday, researchers pushed back on the notion that the AI was truly sentient, saying the evidence only indicated a highly capable system of human mimicry, not sentience itself. “It is mimicking perceptions or feelings from the training data it was given – smartly and specifically designed to seem like it understands,” said Jana Eggers, the chief executive officer of the AI start-up Nara Logics.
The architecture of LaMDA “simply doesn’t support some key capabilities of human-like consciousness,” said Max Kreminski, a researcher at the University of California, Santa Cruz, who studies computational media. If LaMDA is like other large language models, he said, it wouldn’t learn from its interactions with human users because “the neural network weights of the deployed model are frozen”. It would also have no other form of long-term storage that it could write information to, meaning it wouldn’t be able to “think” in the background.
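To illustrate what “frozen” weights mean in practice, the sketch below shows how a deployed language model is typically served for inference only, so conversations never change its parameters. This is a minimal, hypothetical example using the open-source PyTorch and Hugging Face Transformers libraries with a stand-in model; it is not LaMDA’s actual code or Google’s serving setup.

```python
# Minimal sketch (an assumption, not LaMDA's code): serving a language model
# with frozen weights, as is typical for deployed chat systems.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; any causal language model would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# "Frozen" weights: no gradients are tracked and no optimiser ever runs,
# so chatting with the model cannot update its parameters.
model.eval()
for param in model.parameters():
    param.requires_grad = False

prompt = "Are you sentient?"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():  # inference only; nothing is written back into the model
    output_ids = model.generate(**inputs, max_new_tokens=30)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Each request starts from the same fixed weights; the model keeps no
# long-term memory of earlier conversations unless that text is fed back in.
```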