The level of programming and algorithm development that a future AI system would need to make a split-second decision like the one Sully made to land on the Hudson River is unfathomable with current technology. Photo: AFP

Why China’s communist party may regret developing true AI technology

AI is very different from machine learning, where a system is intellectually confined and chooses from a set of algorithms to solve a problem

The View

Artificial intelligence, machine learning and just plain computing have become such conflated ideas that they are not only misleading investors but also worrying the general public. The walls between science fiction and reality are steep, but the implications and outcomes are real and daunting.

AI is basically about a computing system being smart or sentient like a human being, able to learn and to consider moral and ethical dilemmas. It is very different from machine learning, where a system is intellectually confined and chooses from a set of algorithms to solve a problem.

AI is far more complex and difficult to achieve with today’s state of technology. Teaching computers to think and understand the world the same way a human brain does is a nearly impossible endeavour.

Perhaps the best way to illustrate the delicate and furious challenges that AI must meet is the Clint Eastwood film Sully: Miracle on the Hudson, which depicts the actual heroic efforts of US Airways pilots Captain Chesley “Sully” Sullenberger and First Officer Jeff Skiles to safely land their aircraft in New York’s Hudson River after both of its engines were struck by birds.

Modern commercial aircraft operations perfectly circumscribe the limits of today’s technology in daily life. Aside from landing and take-off, where pilots are generally in full control of the plane, the rest of the journey is characterised as “automation management”, where computers control the flight.

While that is not machine learning or AI, it is certainly advanced automation at the highest levels.

In Sully, humans performed what AI systems cannot currently achieve. The pilot drew on his deep human and professional experience as a graduate of the US Air Force Academy and a former fighter pilot, along with his commercial flying record, to arrive at, in a matter of seconds, an intuitive, unconventional, risky, out-of-the-box technical and ethical decision to land in the Hudson River. He instinctively reasoned and believed there was no way the plane, regardless of piloting skill, could land safely at any nearby airport. Indeed, he concluded it would crash into a neighbourhood below.

The level of programming and algorithm development that a future AI system would need to arrive at such a risky choice, in a matter of seconds, is unfathomable with today’s state of technology. Then, consider that preliminary data from the flight recorders suggested that the port engine was still operable at idle power, which theoretically may have given the plane enough power to reach a nearby airport. An AI system would have to perform a highly complex analysis to weigh the risks and take action within a minute, with over a hundred lives at stake.

One of the major limitations of today’s AI functions is the availability of, and access to, the wide array of databases needed to build the kinds of situational and technical knowledge a human would blend together. “Big” and cloud data are important developments, but only in their infancy relative to what AI truly demands. Then, there is the ability to resolve moral dilemmas: taking life-threatening risks in order to avoid fatalities.

Military forces have now incorporated a significant amount of specialised AI in terms of pattern recognition, prediction and autonomous robot navigation. But no one has yet achieved and fused together advanced human-like self-learning and moral resolution capabilities.

Experts say that the sophisticated application of automation, rather than AI, is more of a threat to low-level, semi-skilled and unskilled workers in industries such as fast food. Current automation technology can make and serve hamburgers, so burger outlets could be virtually unmanned.

But the main reason an automated line won’t be applied to fast food in the near future is that it would increase unemployment and depress wages among lower-income workers. The threat of political and economic disruption would be so great that governments just might ban the application.

‘Skynet’ became self-aware in the film series Terminator. Photo: Handout
Last week, the Chinese government unveiled a bold development plan to become the world leader in AI by 2030. It hopes to create a domestic AI industry worth more than 10 trillion yuan (US$1.48 trillion). State pronouncements on industrial development are common in China – from One Belt One Road to landing on the moon.

But China has immense raw material to fuel development in AI and data mining: vast amounts of data from its population. China possesses massive data banks analogous to Saudi oil fields; they are an element of competitive advantage. And data privacy laws in China are less cumbersome, and less vociferously defended, than in western countries.

But China’s AI challenge may create its own domestic Frankenstein monster. In the science fiction tradition of Robert A. Heinlein and Philip K. Dick, the hazards and calamity could be fearful for the Chinese government.

Imagine if China pioneered true AI technology, a sentient, human-thinking system. Then, with all of its access to global databases and collective human experience, it logically determines that it should replace the ruling Communist Party. The ensuing catastrophe would be like “Skynet” becoming self-aware in the Terminator film series.

Science fiction fantasy? Perhaps you can replicate the human brain, but not the human mind.

Peter Guy is a financial writer and former international banker

This article appeared in the South China Morning Post print edition as: Dangers of sentience