Stephen Hawking and other scientists, entrepreneurs and investors published an open letter warning about the potential dangers, as well as the benefits, of artificial intelligence. Photo: EPA


Rethinking the idea of an explosion in machine intelligence rendering humankind redundant

In physics, the term singularity usually refers to a gravitational singularity, a point of infinite density associated with black holes and the Big Bang.

But scientists sometimes use the word in another sense, to refer to a technological explosion that would completely alter the balance between people and machines. In this sense, it is usually cited in discussions of artificial intelligence. Early this year, a group of prominent scientists, entrepreneurs and investors in the field, including physicist Stephen Hawking, billionaire businessman Elon Musk and Frank Wilczek, a Nobel laureate in physics, published an open letter warning about the potential dangers, as well as the benefits, of artificial intelligence.

It caused a stir because of the calibre of the people involved.

But such warnings about an uncontrollable explosion in machine intelligence that may render humankind redundant or obsolete have a long tradition. The influential statistician I.J. Good, a close colleague of Alan Turing, may have been the first to write about the singularity in this sense, in his 1965 article "Speculations Concerning the First Ultraintelligent Machine". He wrote: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

Artificial intelligence researcher Ray Solomonoff, the philosopher David Chalmers and Ray Kurzweil, in his popular 2005 book The Singularity Is Near, have made similar predictions. The central idea is this: Moore's law famously predicts that computing speed doubles every two years. So two years after artificial intelligence reaches human-level intelligence, its speed doubles. And because each faster machine helps design its even faster successor, the next doubling takes only one year, then six months, three months, 1.5 months. Singularity.
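To spell out the arithmetic implicit in that sequence (a worked sum added here purely for illustration, not part of the original argument's wording): the waiting times halve each round, so they form a geometric series with a finite total.

\[
2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots \;=\; \sum_{n=0}^{\infty} \frac{2}{2^{n}} \;=\; 4 \text{ years.}
\]

On this idealised account, every doubling fits inside four years, which is where the four-year figure below comes from.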

Chalmers offers an alternative version of the argument, based on the premise that speed and intelligence are logically independent.

Suppose computing speed doubles with each new generation of machines, while machine intelligence increases by only 10 per cent each time. Compounded over those ever-shorter intervals, both speed and intelligence would still run away within the same four years.
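A minimal simulation sketch of this compounding, under the same assumptions as above (the halving intervals, the doubling of speed and the 10 per cent intelligence gain per generation are illustrative figures taken from this article, not from Chalmers' own formulation):

```python
# Illustrative sketch, not Chalmers' model: each machine generation is
# assumed to arrive in half the time of the previous one, to double in
# speed, and to be only 10 per cent more intelligent than its predecessor.

interval = 2.0       # years until the first self-improved generation
elapsed = 0.0        # total time since human-level AI
speed = 1.0          # speed relative to the human-level baseline
intelligence = 1.0   # intelligence relative to the same baseline

for generation in range(1, 21):
    elapsed += interval
    speed *= 2.0             # speed doubles every generation
    intelligence *= 1.10     # intelligence gains only 10 per cent
    interval /= 2.0          # the next generation arrives twice as quickly
    print(f"gen {generation:2d}: {elapsed:.4f} yrs, "
          f"speed x{speed:,.0f}, intelligence x{intelligence:.2f}")

# 'elapsed' creeps towards, but never exceeds, 4 years, while speed and
# intelligence keep multiplying without bound as the generations pile up.
```

Running it shows the elapsed time approaching four years while both quantities keep growing, which is the point of the thought experiment.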

Not everyone is convinced that a technological singularity is coming.

Andrew McAfee of the MIT Sloan School of Management believes it is possible, but that we are nowhere near such an event. No one knows when, or even if, it is coming. But if it does come, its arrival would be so rapid that human civilisation would most likely not be ready for it.

Such a singularity would be one of the most important events in the history of this planet, comparable to, say, the Cambrian explosion of life forms or the arrival of Homo sapiens. And we probably wouldn't know what had just hit us.

This article appeared in the South China Morning Post print edition as: The technological explosion could catch us all off guard