Can machines think?
In 1950, the famed British mathematician Alan Turing, considered one of the fathers of artificial intelligence, published a paper that posed that very question.
But as quickly as he asked the question, he called it "absurd". The idea of thinking was too difficult to define. Instead, he devised a separate way to quantify mechanical "thinking".
"I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words," he wrote in the study that some say represented the "beginning" of artificial intelligence. "The new form of the problem can be described in terms of a game which we call the 'imitation game'."
What he meant was: Can a computer trick a human into thinking it's actually a fellow human? That question gave birth to the "Turing Test" 64 years ago.
Last weekend, a computer passed that test.
For a computer to pass the test, it must dupe more than 30 per cent of the human interrogators, each of whom converses with it by text for five minutes. It is up to the humans to separate the machines from their fellow sentient beings.
The test was conducted at the Royal Society in London. The Russian-made computer program disguised itself as a 13-year-old boy named Eugene Goostman from Odessa, Ukraine, with a quirky sense of humour and a pet guinea pig.
Eugene was one of five entrants in the 2014 Turing Test.
"We are proud to declare that Alan Turing's Test was passed for the first time on Saturday," declared Kevin Warwick, a visiting professor at the University of Reading, which organised the event at the Royal Society in London. "In the field of artificial intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human."
Computer scientist Vladimir Veselov began work on Eugene in 2001, a year after leaving his home in Russia for the US.
The program analyses questions it receives, and searches a "knowledge base" for material before compiling a response. Some of the time it will ask a clarifying question, or draw on a stock response from memory.
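Eugene's source code is not public, so the following is only a hypothetical sketch of the kind of rule-based design the article describes: match the incoming question against a small "knowledge base" of patterns, and fall back to a clarifying question or a stock response when nothing matches. All patterns and replies here are invented for illustration.

```python
import random
import re

# Hypothetical illustration only; Eugene Goostman's actual code is not public.
# A tiny "knowledge base" mapping question patterns to canned answers.
KNOWLEDGE_BASE = [
    (re.compile(r"\bhow old\b", re.I),
     "I'm thirteen. Why do grown-ups always ask that first?"),
    (re.compile(r"\bwhere .*(from|live)\b", re.I),
     "Odessa, in Ukraine. Ever been there?"),
    (re.compile(r"\b(pet|animal)s?\b", re.I),
     "I have a guinea pig. He is smarter than some people I chat with."),
]

# Fallbacks: clarifying questions and stock responses "from memory".
CLARIFYING = [
    "Could you say that another way?",
    "Why do you ask?",
]
STOCK = [
    "My guinea pig just squeaked, sorry. What were we talking about?",
    "I'd rather talk about something fun.",
]

def reply(question: str) -> str:
    # 1. Search the knowledge base for a matching pattern.
    for pattern, answer in KNOWLEDGE_BASE:
        if pattern.search(question):
            return answer
    # 2. No match: sometimes ask a clarifying question instead...
    if random.random() < 0.5:
        return random.choice(CLARIFYING)
    # 3. ...otherwise draw a stock response from memory.
    return random.choice(STOCK)

print(reply("How old are you?"))
```

The persona matters as much as the matching: posing as a 13-year-old non-native English speaker gives the program cover for evasive or off-topic replies when the fallback branches fire.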
During the tests each judge sat down at a pair of screens and typed in questions. One was linked to a computer with a person at the keyboard, while the other was running a program that generated its own replies.
The judges included Lord Sharkey, who campaigned for Turing's posthumous pardon over a conviction he received for homosexuality.
But one judge, Aaron Sloman, a philosopher and researcher on artificial intelligence at Birmingham University, was unimpressed. Sloman said he took part in the experiment to see how much progress had been made with so-called "chatbots". Speaking about Eugene, he said: "It has kept some - not all - who try it out entertained for more than five minutes. But it is essentially stupid and incompetent, no matter how many people it fools for how long."
Stevan Harnad, professor of cognitive sciences at the University of Quebec in Montreal, said that whatever had happened at the Royal Society, it did not amount to passing the Turing Test. "It's nonsense, complete nonsense," he said. "We have not passed the Turing Test. We are not even close."
Doubts aside, the organisers warned that such technology could be exploited for cybercrime.
"The test has implications for society today," Warwick said. "Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime. It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true ... when in fact it is not."
The Guardian and The Washington Post