Sundar Pichai, chief executive of Alphabet and its subsidiary Google, speaks during a session of the 50th World Economic Forum annual meeting in Davos, Switzerland, on Wednesday, where he said the development of artificial intelligence is “more profound than fire or electricity”. Photo: Reuters

Tech CEOs in Davos dodge issues by warning audiences about AI

  • Sundar Pichai, chief executive of Alphabet and its subsidiary Google, suggests a global framework for AI development efforts
  • The EU is set to unveil its plans to legislate the technology, especially in ‘high-risk sectors’ such as health care and transport

The technology industry’s most influential leaders have a new message: it is not us you need to worry about – it is artificial intelligence (AI).

Two years ago, big tech embarked on a repentance tour to Davos in response to criticism about the companies’ role in issues such as election interference by Russia-backed groups; spreading misinformation; the distribution of extremist content; antitrust violations; and tax avoidance. Uber Technologies’ new chief executive even asked to be regulated.

These problems have not gone away – last year tech’s issues were overshadowed by the world’s – but this time executives warned audiences that AI must be regulated, rather than the companies themselves.

“AI is one of the most profound things we’re working on as humanity. It’s more profound than fire or electricity,” Alphabet chief executive Sundar Pichai said in an interview at the World Economic Forum in Switzerland on Wednesday.

Satya Nadella, chief executive of Microsoft, stresses the need for a set of principles to govern the development of artificial intelligence during a session at the 50th World Economic Forum annual meeting in Davos, Switzerland, on Thursday. Photo: Reuters

Comparing AI to international discussions on climate change, Pichai said: “You can’t get safety by having one country or a set of countries working on it. You need a global framework.”

The call for standardised rules on AI was echoed by Microsoft chief executive Satya Nadella and IBM chief executive Ginni Rometty.

“I think the US and China and the EU having a set of principles that governs what this technology can mean in our societies and the world at large is more in need than it was over the last 30 years,” Nadella said.

It is an easy argument to make. Letting companies dictate their own ethics around AI has led to employee protests. In 2018, after a backlash from its staff, Google notably decided to withdraw from Project Maven, a secret US government programme that used the technology to analyse images from military drones.

Researchers agree. “We should not put companies in a position of having to decide between ethical principles and bottom line,” said Stefan Heumann, co-director of think tank Stiftung Neue Verantwortung in Berlin. “Instead our political institutions need to set and enforce the rules regarding AI.”


The current wave of AI angst is also timely. In a few weeks, the EU is set to unveil its plans to legislate the technology, which could include new legally binding requirements for AI developers in “high-risk sectors”, such as health care and transport, according to an early draft obtained by Bloomberg. The new rules could require companies to be transparent about how they build their systems.

Warning the business elite about the dangers of AI has meant little time has been spent at Davos on recurring problems, notably a series of revelations about how much privacy users are sacrificing to use tech products.

Amazon.com workers were found to be listening in to people’s conversations via their Alexa digital assistants, leading EU regulators to look at more ways to police the technology. In July, Facebook agreed to pay US regulators US$5 billion to resolve the Cambridge Analytica data scandal. And in September Google’s YouTube settled claims that it violated US rules, which ban data collection on children under 13.

Instead of apologies over privacy violations, big tech focused on how far it has come in the past few years in terms of looking after personal data.


Facebook vice-president Nicola Mendelsohn said in an interview on Friday that the company has rolled out standards similar to Europe’s General Data Protection Regulation (GDPR) in other markets.

“Let’s be very clear, we already have regulation, GDPR,” Mendelsohn said in response to a question about the conversations Facebook is having with regulators. “We didn’t just do it in Europe where it was actually regulated. We thought it was a very considered and useful way of thinking about things so we actually rolled a lot of that out around the world as well.”

Keith Enright, Google’s chief privacy officer, also spoke at a separate conference in Brussels this week about how the company is working to find ways to minimise the amount of customer data it needs to collect.

“We’re right now really focused on doing more with less data,” Enright said at a data protection conference on Wednesday. “This is counter-intuitive to a lot of people because the popular narrative is that companies like ours are trying to amass as much data as possible.”


Holding on to data that is not delivering value for users is “a risk”, Enright said.

But regulators are still devising new laws to protect user data. The US is working on federal legislation that would limit the sharing of customer information and, similar to GDPR, require companies to obtain consent from consumers before sharing data with third parties.

Facebook, Amazon, Apple and Microsoft all increased the amount they spent on lobbying in Washington last year, with some of those funds going to pushing industry-friendly privacy bills.

And even though tech executives called for AI rules, they still cautioned against regulating too much, too fast. Pichai reminded lawmakers that existing rules may already apply in many cases. Lawmakers “don’t need to start from scratch”, he said.
