Will super-intelligent machines be the undoing of mankind?

Some scientists believe we are fast approaching 'the singularity', a point where computers overtake human intelligence and ultimately become sentient

PUBLISHED : Friday, 12 June, 2015, 6:11am
UPDATED : Friday, 12 June, 2015, 6:11am

Man versus machine. It's the stuff of science fiction, but some think artificially intelligent computers are improving so quickly that machine intelligence will soon overtake the human brain. Once that happens - and some think it inevitable - reality itself could alter.

That state is called "the singularity", defined not only as the point where computers become smarter than humans, but also as the point where human intelligence can be digitally stored.

Ray Kurzweil, director of engineering at Google, says in his book The Singularity Is Near that by 2045 machine intelligence will be about a billion times more powerful than all human intelligence combined, and that technological development will be taken over by the machines. "There will be no distinction, post-singularity, between human and machine or between physical and virtual reality," he writes.

The singularity is a kind of transcendence beyond which nothing can ever be the same again. An intelligence far greater than any human could conduct scientific research to cure diseases, boost the efficiency of all endeavours, and, yes, save the planet. Or it could wreak havoc, waging wars and enslaving humanity.

It could prove a heaven or a dislocated and dangerous hell, but in a world where all kinds of claims are being made about artificial intelligence, it's easy to get carried away and proclaim that the age of machines is close. The evidence is everywhere. In less than a century, we've gone from calculators to supercomputer clusters. We all walk around with smartphones in our pockets that are more powerful than personal computers from just a few years ago; the law of accelerating returns is in full swing.

Automation is rampant and the so-called Internet of Things promises smart homes, self-driving cars, augmented reality, virtual assistants and everything in between. Even neural implants, 3D-printed internal organs, "smart skin" and brain-to-brain messaging (who needs WhatsApp?) are being researched. Siri, Google Now and Cortana will soon look like digital relics of a simpler time. However, the singularity is a bit of a leap from all of that. It requires a superhuman intelligence that can replace itself with something even more intelligent.

"An ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind," said British mathematician I.J. Good back in 1965. "Thus, the first ultra-intelligent machine is the last invention that man need ever make."

The singularity as a concept has shifted since it was first expressed in the 1950s by mathematician John von Neumann. "Singularity refers to a grab-bag of hypotheses, but the common theme seems to be that technological progress is accelerating since progress feeds on itself," says machine learning expert Sean Owen, director of data science at Cloudera, a data management software company based in Palo Alto, California.

Moore's Law states that computer processing power will double roughly every 18 months, which compounds to about a hundredfold increase every decade. "Merely quantitative advances lead to qualitative advances like artificial intelligence, and because the progress is accelerating, whatever changes may happen very suddenly, relative to what we are ready to ingest," Owen says.
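
A quick back-of-the-envelope check of that compounding - a rough illustration, not a precise forecast - can be written in a few lines of Python:

    # How far does an 18-month doubling compound over a decade?
    doublings_per_decade = 120 / 18      # months in a decade / months per doubling, about 6.7
    growth = 2 ** doublings_per_decade   # roughly a hundredfold
    print(round(growth))                 # prints 102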

In short, technological progress will hit a tipping point and then, bang! The singularity. No one will see it coming, and it will be irreversible. Ignoring super-intelligent machines could be the worst mistake humans have ever made. If such an intelligence wasn't controlled, it could even be the last mistake.

The first ultra-intelligent machine is the last invention that man need ever make
I.J. Good, British mathematician

Mention the singularity and science fiction naturally takes over. Cue doom-laden scenarios and fears of the rise of evil artificial intelligences and robotic armies turned on humanity.

However, one science fiction author thinks that talk of the singularity is a bit far-fetched. Computers may be getting smarter, but they're not getting conscious, says Ramez Naam, a science fiction author and technology ethicist who teaches at the same Singularity University as Google's Kurzweil.

"The most successful and profitable artificial intelligence in the world is almost certainly Google Search," wrote Naam last month on his website rameznaam.com and went on to list the artificial intelligence techniques it uses, such as ranking, classifying and advertising-matching.

"In your daily life you interact with other artificial intelligence technologies [or technologies once considered artificial intelligence] whenever you use an online map, when you play a video game, or any of a dozen other activities, but none of these is about to become sentient," he says.

Sentience, thinks Naam, brings no advantage to the companies who build these software systems. "Building it would entail an epic research project - indeed, one of unknown length involving uncapped expenditure for potentially decades - for no obvious outcome. So why would anyone do it?"

Besides the lack of incentive to build truly conscious supercomputers - and the scarcity of truly groundbreaking advances towards them by computer scientists - Naam thinks there's simply no need for sentient computers. "Once upon a time we imagined that a system that could play chess, or solve mathematical proofs, or answer phone calls, or recognise speech, would need to be sentient," he writes. "It doesn't need to be. You can have your artificial intelligence secretary or assistant and have it be all artifice. And frankly, we'll likely prefer it that way."

Siri is a great example of how useful artificial intelligence can be while remaining entirely un-sentient - and of how it is likely to stay that way. "Systems like Siri are still fundamentally built on statistical models, and so are sophisticated parrots," says Owen. "It's hard to say what thinking is, but I believe most people don't feel that what these things do is think. There is an excessive fascination with making systems or algorithms that work 'like the brain'. This is neither necessary nor sufficient for an intelligence." Siri will get better, smarter, more helpful. But it won't ever be smarter than you.
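
To see what Owen means by a "sophisticated parrot", here is a deliberately toy sketch - emphatically not how Siri is actually built - of a purely statistical bigram model, which can only recombine word pairs it has already seen:

    import random
    from collections import defaultdict

    # A toy "parrot": learn which word follows which, then re-emit those pairs.
    corpus = "the singularity is near the machines are near the machines are coming".split()

    bigrams = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev].append(nxt)

    def parrot(start, length=8):
        word, output = start, [start]
        for _ in range(length):
            choices = bigrams.get(word)
            if not choices:
                break
            word = random.choice(choices)
            output.append(word)
        return " ".join(output)

    print(parrot("the"))   # e.g. "the machines are near the singularity is near the"

The output can sound plausible, but nothing in the model understands a word of it - which is Owen's point.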

Most computer and smartphone technologies are getting faster and more accurate thanks to continued leaps in processor speeds. There's nothing new about that, but we are approaching an era of computers inspired by the human brain.

IBM's SyNAPSE chip is a new kind of cognitive computer whose architecture is modelled on the brain rather than on conventional sequential number-crunching, geared towards human-like skills such as recognising images and patterns. That kind of pattern recognition is key if computers are ever going to understand the subtleties of human conversation.

Powered by a million neurons, 256 million synapses and 5.4 billion transistors, SyNAPSE has an on-chip network of 4,096 neuro-synaptic cores. Most impressively, it can sense and react to its surroundings in real time.

Others are using the sheer power of conventional processors to imitate human-like thinking as closely as possible. In August 2013, neuroscientists at the Okinawa Institute of Science and Technology Graduate University in Japan and Forschungszentrum Jülich in Germany used one of the world's most powerful supercomputers, the K computer in Kobe, Japan, to mimic the brain's activity. The scientists simulated a network of 1.73 billion nerve cells connected by 10.4 trillion synapses - about 1 per cent of the neuronal network in a real, biological human brain. It took the K computer 40 minutes to simulate a single second's worth of activity in that network, and doing so required about as much memory as a quarter of a million personal computers combined.
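
Taken at face value, those figures imply a sobering slowdown factor - a rough calculation from the numbers reported above:

    # 40 minutes of supercomputing per single second of simulated neuronal activity
    sim_seconds = 40 * 60                # 2,400 seconds of wall-clock computation
    brain_seconds = 1                    # one second of simulated biological time
    print(sim_seconds / brain_seconds)   # 2400.0 - roughly 2,400 times slower than real time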

The singularity therefore isn't close, but if even a slice of the human brain can already be simulated, you can bet that it will be done faster next time. "It will be completely transformational in the long term," says Owen. "It's either part of a transcendence or an apocalypse."

Either way, the human brain has 20 years left at the top.