

AI in 2030: will it have empowered humans or destroyed us? Experts weigh in

  • Nearly 1,000 experts were asked whether artificial intelligence will have made humanity better off by 2030. Only two-thirds said yes
  • On the plus side are self-driving cars and customised health care, on the minus side fears of data abuse, loss of jobs, loss of control, and autonomous weapons
PUBLISHED : Saturday, 22 December, 2018, 10:22am
UPDATED : Sunday, 23 December, 2018, 6:42pm

The year is 2030 and artificial intelligence has changed practically everything. Is it a change for the better or has AI threatened what it means to be human, to be productive and to exercise free will?

You’ve heard the dire predictions from some of the brightest minds about AI’s impact. Tesla and SpaceX chief Elon Musk worries that AI is far more dangerous than nuclear weapons. The late scientist Stephen Hawking warned AI could serve as the “worst event in the history of our civilisation” unless humanity is prepared for its possible risks.


But many experts, even those mindful of such risks, have a more positive outlook – especially in health care and possibly in education. That is one of the takeaways from a new AI study by the Pew Research Centre and Elon University’s Imagining the Internet Centre.

Pew canvassed the opinions of 979 experts over the summer, a group that included prominent technologists, developers, innovators, and business and policy leaders. Respondents, some of whom chose to remain anonymous, were asked to weigh in on a weighty question: “By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them?”

Nearly two-thirds predicted most of us will be mostly better off. But a third thought otherwise, and a majority of the experts expressed at least some concern over the long-term impact of AI on the “essential elements of being human”. Among those concerns were data abuse; loss of jobs; loss of control as decision-making in digital systems is ceded to “black box” tools that take data in and spit answers out; an erosion in our ability to think for ourselves; and yes, the mayhem brought on by autonomous weapons, cybercrime, lies and propaganda.

“There’s a quite consistent message throughout answers … that some good things would emerge and there were some problems to worry about,” says Lee Rainie, director of internet and technology research at the Pew Research Centre.

Janna Anderson, director of the Imagining the Internet Centre, says that some respondents thought things would be OK up to 2030, “but I’m not sure after that”.

Andrew McLaughlin at Yale, who was deputy chief technology officer in the administration of US president Barack Obama and was global public policy lead at Google, says: “My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognisable.

“AI will drive a vast range of efficiency optimisations but also enable hidden discrimination and arbitrary penalisation of individuals in areas like insurance, job seeking, and performance assessment.”

Technology blogger Wendy Grossman writes: “I believe human-machine AI collaboration will be successful in many areas, but that we will be seeing, like we are now over Facebook and other social media, serious questions about ownership and who benefits.

“It seems likely that the limits of what machines can do will be somewhat clearer than they are now, when we’re awash in hype. We will know by then, for example, how successful self-driving cars are going to be, and the problems inherent in handing off control from humans to machines in a variety of areas will also have become clearer.”

Leonard Kleinrock, Internet Hall of Fame member, replies: “As AI and machine learning improve, we will see highly customised interactions between humans and their health care needs.

“This mass customisation will enable each human to have her medical history, DNA profile, drug allergies, genetic make-up, etc, always available to any carer/medical professional.”


Robert Epstein, senior research psychologist at the American Institute for Behavioural Research and Technology, says: “By 2030, it is likely that AIs will have achieved a type of sentience, even if it is not human-like. They will also be able to exercise varying degrees of control over most human communications, financial transactions, transport systems, power grids, and weapons systems … and we will have no way of dislodging them.

“How they decide to deal with humanity – to help us, ignore us or destroy us – will be entirely up to them, and there is no way currently to predict which avenue they will choose. Because a few paranoid humans will almost certainly try to destroy the new sentient AIs, there is at least a reasonable possibility that they will swat us like the flies we are – the possibility that Stephen Hawking, Elon Musk and others have warned about.”

A social scientist who remained anonymous said: “My chief fear is face-recognition used for social control. Even Microsoft has begged for government regulation! Surveillance of all kinds is the future for AI. It is not benign if not controlled.”

Yet another anonymous respondent offered a different concern: “Knowing humanity, I assume particularly wealthy white males will be better off, while the rest of humanity will suffer from it.”

Ben Shneiderman, founder of the Human Computer Interaction Centre at the University of Maryland, offers a very bullish take: “Automation is largely a positive force, which increases productivity, lowers costs and raises living standards. Automation expands the demand for services, thereby raising employment, which is what has happened at Amazon and FedEx.

“My position is contrary to those who believe that robots and artificial intelligence will lead to widespread unemployment.”

Wendy Hall, a professor of computer science at the University of Southampton in the UK and executive director of the Web Science Institute, says: “It is a leap of faith to think that by 2030 we will have learnt to build AI in a responsible way and we will have learnt how to regulate the AI and robotics industries in a way that is good for humanity.


“We may not have all the answers by 2030, but we need to be on the right track by then.”