Rethinking the idea of an explosion in machine intelligence rendering humankind redundant

The term singularity usually calls to mind gravitational physics, where it is associated with black holes and the Big Bang.
But scientists sometimes use the word in another sense, to describe a technological explosion that would completely alter the balance between people and machines. In this sense it is usually invoked in discussions of artificial intelligence.

Early this year, a group of prominent scientists, entrepreneurs and investors in the field of artificial intelligence, including the physicist Stephen Hawking, the billionaire businessman Elon Musk and Frank Wilczek, a Nobel laureate in physics, published an open letter warning of the potential dangers, as well as the benefits, of artificial intelligence.
It caused a stir because of the calibre of the people involved.
But warnings about an uncontrollable explosion in machine intelligence, one that might render humankind redundant or obsolete, have a long tradition. The influential statistician IJ Good, a close colleague of Alan Turing, may have been the first to write about the singularity in the sense we use the term today, in his 1965 article Speculations Concerning the First Ultraintelligent Machine. He wrote: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."
The artificial intelligence researcher Ray Solomonoff and the philosopher David Chalmers have made similar predictions, as has Ray Kurzweil in his popular 2005 book The Singularity is Near. The central idea is this: Moore's law famously predicts that computing speed doubles every two years. So two years after artificial intelligence reaches human-level intelligence, its speed doubles. But the next round of design work is now being done by a machine twice as fast, so the following doubling takes only one year. Then six months, three months, a month and a half. Singularity.
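One way to make the arithmetic explicit (a sketch of the standard geometric-series reading of this argument, not a calculation that appears in Good's or Chalmers' own writing): if the first doubling takes two years and each subsequent doubling takes half as long as the one before, the total time needed for infinitely many doublings is finite,

\[
2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots \;=\; \sum_{n=0}^{\infty} \frac{2}{2^{n}} \;=\; 4 \ \text{years}.
\]

On this reading, speed diverges within four years of reaching human level, which is what earns the scenario the name singularity.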
Chalmers offers an alternative argument, built on the premise that speed and intelligence are logically independent: a faster machine is not, by that fact alone, a smarter one.