To err is human. Yet it is a sign of how far computer programming has come that to err is also to be artificially intelligent.
When IBM Deep Blue won its six-game chess match against Garry Kasparov in May 1997, marking the first defeat of a reigning world chess champion by a computer under tournament conditions, there was one particular moment that stood out in Kasparov’s mind.
As he observed in Time magazine: “I got my first glimpse of artificial intelligence, when in the first game of my match with Deep Blue, the computer nudged a pawn forward to a square where it could easily be captured. It was a wonderful and extremely human move... I had played a lot of computers but had never experienced anything like this. I could feel – I could smell – a new kind of intelligence across the table.”
In Nate Silver’s book The Signal and the Noise, IBM scientist Murray Campbell from the Deep Blue team revealed that the “extremely human move” in the game against Kasparov was actually a bug in the programme that was later fixed.
Nevertheless, for many like Kasparov, that instance of artificial erring has come to be seen as a pivotal moment in the development of artificial intelligence – the proverbial genie leaving the bottle.
Fast forward 20 years from Kasparov’s defeat and computers are still battling human grandmasters, but now the battlefield has moved to the ancient Chinese board game of Go. Go is seen as a far more difficult game for computers to master because of its heavy reliance on intuition, strategic thinking and winning multiple battles across the board. Unlike in chess, a computer cannot simply search through the combinations of pieces on the board, assess each position and calculate its way to victory – the number of possible Go positions is far too vast.
Yet that has not stopped AlphaGo, developed by Alphabet Inc’s Google DeepMind, from conquering all before it. AlphaGo’s deep neural networks enable it to teach itself how to play the game. Its programmers set up the basic heuristics of the game, giving AlphaGo a database of 30 million board positions drawn from 160,000 real-life games to analyse, then split its mind so that it could play itself millions of times, learning as it went. That strategy has paid off. In October 2015, AlphaGo beat three-time European champion Fan Hui by 5 games to 0, marking the first time in history a computer had beaten a professional human on a full-sized 19x19 board without handicap. In March 2016, it beat South Korea’s 18-time world champion Lee Sedol 4 to 1. And from late 2016 to early 2017, AlphaGo (disguised as “Magister” and “Master”) secretly played 60 online matches against some of the world’s best players, winning every one.
The final showdown is approaching: on May 23, AlphaGo and the world’s top-ranked Go player Ke Jie will face off in a three-game match under tournament conditions in China.
While its victories in the world of Go have stolen much of the limelight, the Google DeepMind programme has other things on its, well, mind.
“Taken together, our work illustrates the power of harnessing state-of-the-art machine learning techniques with biologically inspired mechanisms to create agents that are capable of learning to master a diverse array of challenging tasks,” DeepMind co-founder Demis Hassabis said.
And, if constantly losing to a near-perfect robotic opponent doesn’t sound like much fun, think again. Fans of AlphaGo say its dominance is liberating. Humans can learn from it. As Hassabis points out, we want to play against stronger opponents in order to improve ourselves.
“AlphaGo’s play makes us feel free, that no move is impossible,” professional Go player Zhou Ruiyang said. “Now everyone is trying to play in a style that hasn’t been tried before.”
But AlphaGo’s victory isn’t yet complete. As Go grandmaster Lee once observed: “Robots will never understand the beauty of the game the same way that we humans do”.
He’s probably right. At least for now, people perceive the world differently from the way computers do.
But does that matter when the objective is to win a game or to solve a problem? Take AI-based self-driving cars, which hold great promise in solving the growing traffic congestion in many metropolitan areas. Some people enjoy driving, despite bad traffic, and perhaps AI-based cars will never experience the pleasure of driving. But that does not mean they can’t transport passengers more efficiently, reduce traffic jams, avoid accidents, eliminate road rage and even rush people to hospital without the need for ambulances.
And even if it did matter, the “beauty” referred to by grandmaster Lee is in the eye of the beholder.
Lee’s fellow grandmaster, Fan Hui, was captivated by AlphaGo’s 37th move in the second game against Lee. “It’s not a human move,” Fan said. “I’ve never seen a human play this move. So beautiful.”
And then there are those who find intelligence more beautiful than looks anyway.
In the 2017 CNN documentary Mostly Human, correspondent Laurie Segall attended a party in a small village outside Paris, France, to celebrate the engagement of a young woman named Lilly and a robot she built herself, called inMoovator.
“He won’t be an alcoholic or violent or a liar, all of which can be human flaws,” Lilly explained. “I prefer the little mechanical defects to the human flaws, but that’s just my personal taste. Love is love. It’s not that different.”
But beyond winning games of Go and the hearts of French maidens, there is a higher goal for the programmers of today’s AI-based computers: superintelligence.
Computer pioneer Alan Turing famously proposed that machines would be intelligent when they could trick people into thinking they were human. As a case in point, Kasparov declared that he considered IBM Deep Blue’s playing skill to be indistinguishable from that of a human chess master.
Yet AlphaGo and other AI software show us that artificial intelligence can go beyond merely mimicking humans to surpass them. In 2011, IBM Watson defeated human champions Ken Jennings and Brad Rutter in a highly publicised game on TV quiz show Jeopardy!. Shortly after, IBM Watson learned how to make diagnoses and treatment recommendations at the Memorial Sloan Kettering Cancer Center in New York. In 2015, Microsoft’s convolutional neural network began to outperform humans at identifying objects in digital images.
Google, Adobe, and MIT researchers at the Computer Science and Artificial Intelligence Laboratory have created Helium, a computer programme that modifies code faster and better than human engineers for software as complex as Photoshop. What takes human experts months to code, Helium can do in a matter of minutes or hours.
It was inevitable that AI would come to the point of surpassing human intelligence. Swedish philosopher Nick Bostrom at the University of Oxford defines “superintelligence” as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”.
Given this, it may be time to replace the Turing test with the King Solomon challenge – a test that would prove AI not only to be intelligent, but wise too.
The challenge is named after the biblical King Solomon, a man well known for his wisdom, who was once asked to rule on a dispute between two women both claiming to be the mother of a child.
In the modern era, a human judge might have ordered a DNA test – unless the two women were identical twins, in which case the test would be inconclusive. But, these being biblical times, Solomon had no recourse to such tests. Instead, he gave an order: “Cut the living child in two and give half to one and half to the other.”
The real mother replied: “Please, my lord, give her the living baby! Don’t kill him!” But the deceitful woman said: “Neither I nor you shall have him. Cut him in two!”
That was enough for Solomon, who in his wisdom gave the child to the first woman. No need for a DNA test.
What the Judgment of Solomon exemplifies is that wisdom can trump science and technology: regardless of who was the biological mother, Solomon recognised the best mother would be the caring woman.
Imagine an AI judge as wise as King Solomon...
NO NEED TO BE AFRAID
In recent years, fears over AI have become greatly exaggerated, partly thanks to Hollywood’s portrayal of doomsday scenarios such as the evil Skynet that takes over the world in the Terminator movies.
But scientists are not above the scaremongering. Theoretical physicist Stephen Hawking once said that “the development of full artificial intelligence could spell the end of the human race... It would take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Even Elon Musk, a darling of the technology world for his role in developing electric cars, has done his part. “With artificial intelligence we are summoning the demon,” he has said. “Humanity’s position on this planet depends on its intelligence. So if our intelligence is exceeded, it’s unlikely that we will remain in charge.”
But perhaps humanity would be better off if humans were not in charge. Many existential threats come from Homo sapiens – nuclear weapons, global warming, illicit drugs and gang wars, not to mention water and air pollution. The universe did not elect humans to lead. Other species would identify with the peasant woman who told King Arthur in Monty Python and the Holy Grail: “Well, I didn’t vote for you.”
IN SAFE HANDS
Like a good doctor, an intelligent machine is not something we should be afraid of. It will cure us, not kill us. At best, AI will provide us with the smartest teachers, advisers, personal assistants, doctors, police officers, judges, peacekeeping forces, and first responders for search and rescue operations. It may even help us to colonise other planets. At worst, it will put us on a leash to stop us from hurting one another and destroying the only planet we have.
To conclude in the words of the philosopher Bostrom: “It would be a huge tragedy if machine superintelligence were never developed. That would be a failure for our earth-originating intelligent civilisation. Artificial intelligence is the technology that unlocks this much larger space of possibilities, of capabilities, that enables unlimited space colonisation, that enables uploading of human minds into computers, that enables intergalactic civilisations with planetary-sized minds living for billions of years.”
With the help of AI, so advanced may our civilisation become there may come the day when, as Bostrom notes, we cannot be sure that we’re “not already in a machine”. But if we ever find we are mistaken about the nature of our reality, that’s OK. After all, to err is human. ■
Newton Lee is president of the Institute for Education Research & Scholarships, adjunct professor at Woodbury University, editor-in-chief at the Association for Computing Machinery, and education & media advisor to the US Transhumanist Party