Would Google’s LaMDA say Happy the elephant is a person?
- The recent court case centring on an elephant in a New York zoo shows the difficulty facing those pushing to have animals recognised as legal people with rights
- There seems to be more interest in whether artificial intelligence should be given rights once it passes the threshold of sentience
The elephant, named Happy, was taken from the wild in Thailand when she was a year old and has been held in captivity for over 40 years in a two-acre (0.8 hectare) enclosure, a tiny living space for an elephant. Despite the Bronx Zoo's announcement that it would end its elephant exhibit once only one elephant remained, it has kept Happy living alone for the last 16 years.
This court case could have marked an overdue shift in the relationship between human animals and non-human animals: the elephant would no longer have been a thing but a legal person with rights.
The common law concept of habeas corpus – which in Latin means “you have the body” – has been used to let a court decide whether a person was lawfully imprisoned. It was used innovatively in 1772, when Lord Mansfield accepted the use of habeas corpus to free James Somerset, an enslaved person, marking a watershed moment in the abolition of slavery.
On June 14, judges on New York’s highest court voted 5-2 to reject the claim that Happy was unlawfully confined at the Bronx Zoo. Writing in dissent, Judge Rowan Wilson said: “When the majority answers, ‘No, animals cannot have rights,’ I worry for that animal, but I worry even more greatly about how that answer denies and denigrates the human capacity for understanding, empathy and compassion.”
There seems to be more interest in whether artificial intelligence should be given rights once it crosses the threshold of sentience. Blake Lemoine, a Google software engineer, claimed the AI chatbot LaMDA had become sentient and had a soul.
René Descartes wrote that non-human animals are automata: mere machines without reason, mind, thought or pain, whose souls, if they have any, are not immortal, unlike those of human animals. In contrast, Jeremy Bentham wrote: “The question is not, can they reason? nor, can they talk? but, can they suffer? Why should the law refuse its protection to any sensitive being?”
If an AI starts to make decisions more intelligently than human animals do, it might immediately grant personhood to non-human animals. Paradoxically, sentient AI might be capable of making a more sensible distinction between a legal person and a thing.
If it cannot, our own future as sentient beings will be bleak. Let us set a good example and emancipate our non-human animal brothers and sisters ourselves before the dawn of the singularity.
Danny Friedmann is assistant professor of law at Peking University School of Transnational Law in Shenzhen