Alex Lo
SCMP Columnist
My Take
by Alex Lo

Man vs machine: we are a long way from real artificial intelligence

  • Before you buy into the ChatGPT hype, perhaps you should first listen to the criticism of current AI by some eminent philosophers

Do philosophers ever have anything to offer investors? I know it’s a strange question, but when it comes to artificial intelligence (AI), it turns out that those who work under the intellectual umbrella of cognitive science, linguistics and philosophy of language have a lot to say. For shorthand here, I will refer to such researchers simply as “philosophers”. One reason is that the towering intellectual figure Noam Chomsky, whom I will discuss below, has revolutionised all three disciplines.

Hot new tech has a tendency to blow bubbles among investors; it is primed for hype and speculation. Its promoters usually overpromise and underdeliver, even when the underlying innovation eventually pans out, as the internet and e-commerce did, though not before the dotcom bubble inflated and spectacularly burst. Often, though, they don’t deliver at all. Cryptocurrencies, anyone?

At the moment, AI is all the rage. Thanks to ChatGPT, the machine-learning programme that promises to revolutionise AI, some people are claiming the imminent arrival of long-promised machine intelligence that will rival, if not supersede, human intelligence, or what people in the trade sometimes call general AI. No wonder the largest IT firms, and some of the world’s biggest private equity and venture capital investors, are pouring money into AI companies like there is no tomorrow.

It won’t be the last time a new technology promises to be the next big thing while captivating the public imagination. Here, philosophers may help put a brake on the overenthusiasm. Nor would it be the first time philosophers have helped dampen public expectations in the history of AI.

ChatGPT interacts with users in natural-sounding language. So if you ask it a question, say, “Why do some people reject the theory of evolution?”, it can come back with a short answer or an essay-length reply. That has created the impression of a human-like intelligence. But is it?

Last May, Elon Musk, sometimes the world’s richest man, tweeted: “2029 feels like a pivotal year. I’d be surprised if we don’t have AGI [artificial general intelligence] by then.”

In late November, the supposedly non-profit OpenAI released ChatGPT, which amassed 100 million users in just two months, handily beating previous record-holders Instagram (2.5 years) and TikTok (nine months). Now, the chatbot-cum-search-engine is on everyone’s lips. Though OpenAI is not publicly listed, The Wall Street Journal reported in January that private share sales valued it at a whopping US$29 billion, against a projected revenue of US$1 billion next year.

Any start-up with AI in its name or its research résumé attracts eager investors. According to GlobalData, 3,198 AI start-ups received US$52.1 billion in funding across 3,396 venture-capital deals last year.

But is all the hype justified? In a scathing op-ed in The New York Times last week that collected more than 2,100 reader responses and has since gone viral, Chomsky and two co-authors decried the hype surrounding ChatGPT and other machine-learning tools like it.

Titled “The False Promise of ChatGPT”, the trio wrote: “These programs have been hailed as the first glimmers on the horizon of artificial general intelligence – that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.

“That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments.”

Ouch! Such machine-learning programs take “huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs – such as seemingly human-like language and thought”.
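To make “statistically probable outputs” concrete, here is a toy sketch of my own, not from the op-ed: a program that counts which word tends to follow which in a sample text, then generates a plausible-sounding continuation by sampling from those counts. ChatGPT uses a vast neural network rather than raw word counts, but the statistical spirit is the same.

```python
# A toy of "statistically probable outputs": count which word follows
# which in a text, then generate by sampling from those counts.
# (Illustration only; ChatGPT uses huge neural networks, not bigram counts.)
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate and the dog sat".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Pick a successor in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:  # dead end: no observed successor, restart anywhere
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a "statistically probable" continuation, one word at a time.
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and"
```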

Generating statistically probable outputs is, obviously, extremely useful across different professions and intellectual disciplines. But what Chomsky and his colleagues argue is that this is still specific AI, something that has been around for decades, even if it is now better, faster and covers many more tasks. It is not general AI that is equivalent or comparable to human intelligence.

“It is at once comic and tragic … that so much money and attention should be concentrated on so little a thing [machine learning] – something so trivial when contrasted with the human mind,” they wrote.

“The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.”

This is not the first time a pre-eminent philosopher has intervened in the AI debate. In the early 1970s, the late American philosopher Hubert Dreyfus published a series of papers and the influential book What Computers Can’t Do, in which he argued that human intelligence is not a disembodied cognitive machine running on explicitly written rules or representations like algorithms. It is not a brain isolated in a jar, the kind of bodiless intelligence envisaged by René Descartes at the dawn of modern Western philosophy and science.

A follower of the phenomenologists Martin Heidegger and Maurice Merleau-Ponty, Dreyfus claimed that all our higher mental functions derive from what Heidegger called “being in the world”: our active engagement in, say, the world of surgeons if you are a surgeon, the world of accountants if you are an accountant, or the world of carpentry if you are skilled in woodwork. All such skills are embodied intelligence, creativity and knowledge, worlds apart from computing. They cannot be coded into computers or AI.

Interestingly, Chomsky attacks AI from the opposite end. He was a Cartesian; I don’t know if he still is, but probably not. One of his most fascinating books is Cartesian Linguistics, first published in 1966. His revolutionary school of linguistics is highly formal, rules-based and algorithm-driven. Indeed, one of his linguistic-mathematical techniques was used in the Human Genome Project during the 1990s to detect recurrent genetic patterns.

The syntactical rules that Chomsky and his school formalised to mimic human languages contributed to computer coding and the development of AI. But his fundamental – and to me, the most captivating – philosophical position is what he calls the “infinite use of finite means”: creating ideas and theories with universal reach, based on, and constrained by, a small (or finite) number of rules.
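To give a flavour of the “infinite use of finite means”, here is a toy sketch of my own, not Chomsky’s actual formalism: a handful of grammar rules that, because they allow recursion, can generate an unbounded number of distinct sentences.

```python
# A toy context-free grammar: a finite set of rules that can generate an
# unbounded number of sentences, because one rule may invoke another
# recursively. (Illustration only; Chomsky's formal grammars are far richer.)
import random

RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursive: NP contains VP
    "VP": [["V", "NP"], ["V"]],
    "N":  [["cat"], ["dog"], ["linguist"]],
    "V":  [["saw"], ["chased"], ["admired"]],
}

def generate(symbol="S"):
    """Expand a symbol by picking one of its rules at random."""
    if symbol not in RULES:  # a terminal word: emit it as-is
        return [symbol]
    words = []
    for part in random.choice(RULES[symbol]):
        words.extend(generate(part))
    return words

# Each run may yield a new sentence; recursion means there is no upper
# bound on sentence length or on the number of possible sentences.
for _ in range(3):
    print(" ".join(generate()))
# e.g. "the dog that chased the cat saw the linguist"
```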

Learning a language, or our innate acquisition of grammar, is Chomsky’s primary field of study. “The child’s operating system [for language] is completely different from that of a machine learning program,” he and his colleagues wrote in the Times.

“Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered.”

But machine learning is the exact opposite. They argue: “Such programs are stuck in a pre-human or non-human phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case – that’s description and prediction – but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.”

A hallmark of human intelligence is its efficiency, precisely because the brain has limited computing power and memory.

“ChatGPT and similar programs are, by design, unlimited in what they can ‘learn’ (which is to say, memorise); they are incapable of distinguishing the possible from the impossible … For this reason, the predictions of machine learning systems will always be superficial and dubious,” the trio wrote.

Consider the key equations Albert Einstein used to develop the general theory of relativity. The mathematics behind them had largely been worked out before him, but he used it to come up with a new theory of gravitation. By analogy, AI can tell you all about those equations, but it will never come up with a new theory the way Einstein did.

Now, most of us are not experts in linguistics, computers or AI. But we are all familiar with human intelligence, or stupidity. As Chomsky says, it is precisely because we are prone to errors and stupid mistakes that we can be said to be intelligent. What Dreyfus and Chomsky say about human intelligence and creativity makes a lot more sense to me than the claims made for AI.

Current specific (as opposed to general) AI capabilities are, of course, already impressive and extraordinary. That may be one reason we think they are human-like. An automated stock-trading program will beat the vast majority of human investors over both the long and short term. Some international wire services already use machines to write simple news stories. There are automated kitchens that can produce gourmet food.

And yes, they will replace many human jobs and no doubt will be worth billions in any market economy. But if you are an investor, perhaps you should still evaluate them as specific, rather than general, AI. Don’t buy into the hype.
