Artificial intelligence platforms will entail difficult ethical choices. An artist's impression of Pop.Up, a modular concept vehicle designed by Airbus and Italdesign, which makes use of an artificial intelligence platform. The concept premiered at the Geneva International Motor Show on March 7, 2017. Photo: Airbus/Italdesign handout courtesy of EPA
Opinion
The View
by Peter Guy

Wealth advisers are doomed in our new artificial intelligence-enabled world

‘Humans are supposed to be the most sophisticated data-processing system, but this is no longer the case as AI ascends’

Driverless cars and the technology behind them – artificial intelligence (AI) – may not actually arrive and herald a new world.

Past revolutionary visions of technology were rooted in the industrial and post-industrial revolutions. The issues of the day were how to use new technologies like electricity, radio and computers. Today, we are shifting moral authority to complex algorithms and losing the ability to find our way.

Artificial intelligence, under today's technological limits, cannot surmount "the trolley problem", according to a study by Azim Shariff, an assistant professor of psychology at the University of Oregon and director of the Culture and Morality Lab at the University of California, Irvine. There are several scenarios based on the trolley problem.


My favourite is this: a self-driving car carrying a family of four on a mountain road spots a bouncing ball ahead. Then a child runs into the road to retrieve the ball. The car's algorithm must quickly decide: should the car risk its passengers' lives by swerving to the side, where the edge of the road drops off a steep cliff? Or should it stay on course, ensuring its passengers' safety but hitting the child?

This scenario and many others pose moral and ethical dilemmas that car makers, car buyers and regulators must address before vehicles should be given full autonomy, according to a study published recently in Science magazine.

An AI designer must have programmed into the system the ability to make a moral choice between an intervention that sacrifices one person for the good of the group and one that protects an individual at the expense of the group.

The most challenging AI dilemma can be drawn from the 2016 film Sully, in which the commercial pilot Chesley Sullenberger must make an instinctive snap decision, based on decades of flight experience, to save his passengers by landing his damaged plane on New York's Hudson River instead of returning to the airport.

Clint Eastwood and Chesley Sullenberger on the set of Sully. Photo: Warner Brothers/Keith Bernstein

If AI replaces the pilot, an extraordinary level of sophistication and data access is needed to assess the risk and arrive at such a bold, human-like decision. If AI is to play a major role in our lives, it will have to be able to make choices that represent the dangerous, if not lethal, intersection between morality and technology.

The paradox goes beyond designing and programming, as life-and-death decisions cannot be refined by design alone. Rather, AI must confront the problem of how to replicate sentient intelligence and all of its unknown decision paths and outcomes. The only way driverless cars will hit the road is when their manufacturers accept liability for damage.

Data appears to be evolving into a new ethical system. Humans are supposed to be the most sophisticated data-processing system, but this is no longer the case as AI ascends. An external algorithm using cloud data could be capable of understanding human feelings, emotions, choices and desires better than you understand yourself. That’s when humans become redundant.

To replicate sentient intelligence, algorithms need to be able to use cloud data. But, there’s already too much data being stored beyond the capabilities of current analysis. For example, financial data experts say they are required by banks, regulators and financial institutions to store an avalanche of data since the financial crisis. One expert said he asked his bank clients, “How many more data warehouses do you want to open?”



The dirty secret, he explained, is that financial institutions are collecting a lot of data without being able to use it. Until cloud data can be fully established for everything we know and technology can utilise it fully, the AI dream cannot be truly achieved.

The biggest and most visible threat from emerging financial technologies is to bank tellers and wealth advisers. The elusive concept of banks becoming technology companies is a fallacy because tech people don't make warehousing decisions. Banking success will require doing more than adopting appropriate technologies. Dominant institutions also need to understand what sort of banking model will be pre-eminent in five to 10 years' time.


Yet if you install any banking app with geo-location access, banks know where you work and when you arrive at your office. Add to that all of your desires: whatever you type into your search engine for purchases you want to make, plus those you actually did make. They know your entire spending behaviour. This represents a huge opportunity that rivals cannot access, because the data is owned by banks. Analyse this data and you have a clear picture of a person and their risk profile. Banks know a lot more than just your bank balance.

Peter Guy is a financial writer and former international banker

This article appeared in the South China Morning Post print edition as: AI world a big dream