Beyond data privacy issues, artificial intelligence applications have the potential to exacerbate many of the challenges the world faces with bias and discrimination. Photo: Shutterstock

Basic ethical questions of transparency and bias in AI remain unsolved, experts say

  • Big Tech companies are expected to perform a delicate balancing act in advancing AI development, while ensuring the technology’s ethical application
  • Proponents see AI as an opportunity to transform a broad swathe of industries, including transport, financial services, retail and media
Artificial intelligence (AI) has become a powerful tool around the world, but advances in the technology have also raised hard questions about its ethical use, experts said on Thursday at the South China Morning Post’s annual China Conference.

“AI is genuinely a huge challenge to data privacy,” said Alan Chiu, managing partner at Hong Kong law firm ELLALAN. “Big data analysis, at least in some ways, inherently conflicts or stretches the limits of fundamental data protection principles.”

Without elaborating, Chiu declared that “most AI technologies lack transparency”.

Any trepidation about AI, however, has long been brushed aside by proponents who see the technology as an opportunity to transform a broad swathe of industries, including transport, financial services, retail and media. That is especially true in mainland China, where Big Tech companies are the AI sector’s biggest investors, developers and users.

AI-driven identity verification using facial recognition, for example, has been widely adopted in China. The technology has become an integral part of apps from mobile payments and travel to retail, as well as surveillance systems and online platforms for government services.

This development, however, has made data privacy and cybersecurity major issues in the world’s second-largest economy and home to nearly 1 billion internet users. China is grappling with data privacy concerns, as the underground trading of personal information thrives amid Beijing’s push to have the digital sector play a bigger role in expanding China’s economy.

The concerns Chiu raised at the conference highlight the delicate balancing act Big Tech companies must perform in advancing AI development while ensuring the technology’s ethical application.

AI systems should be human-centric, putting the welfare of consumers “at the front and centre of any AI project”, he said.

With the upcoming Personal Information Protection Law (PIPL), the Data Security Law and pending local regulations, China is moving to build a data governance regime that balances protecting user privacy, sustaining a thriving digital economy with a viable data market, and maintaining strong government control.

The much-anticipated PIPL will be China’s first law dedicated to protecting personal information. It is currently undergoing a second round of review and is expected to be rolled out later this year.

China’s internet giants are each expected to create an independent oversight body for protecting users’ information under the PIPL, as Beijing tightens its scrutiny of how Big Tech companies gather and make use of private data.

Beyond data privacy issues, AI applications have the potential to exacerbate many of the challenges the world faces with bias and discrimination.

That occurs partly because of the data used to train machines, according to Jyh-An Lee, a law professor from the Chinese University of Hong Kong (CUHK).

“The readily available data might be biased in favour of existing practices, and sometimes that bias might also result from data selection, which means the input data does not accurately represent the population or context,” Lee said.

He cited as an example Amazon.com’s use of AI for recruitment, which favoured male applicants “because the data they used to train the AI were CVs of their male employees”.
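
As a rough illustration of the selection bias Lee describes, consider the following minimal sketch. The figures are made up for illustration and are not the actual Amazon data; the point is only that a training set whose group shares differ from the population’s will push a model toward the over-represented group.

```python
from collections import Counter

# Hypothetical applicant pool versus the CVs used for training.
# All figures are illustrative and are not real recruitment data.
population = ["male"] * 500 + ["female"] * 500    # who actually applies
training_set = ["male"] * 450 + ["female"] * 50   # whose CVs train the model

def group_shares(records):
    """Return each group's share of the given records."""
    counts = Counter(records)
    total = len(records)
    return {group: count / total for group, count in counts.items()}

print("population shares:", group_shares(population))    # {'male': 0.5, 'female': 0.5}
print("training shares:  ", group_shares(training_set))  # {'male': 0.9, 'female': 0.1}

# A model fitted to this sample "learns" that successful candidates are
# overwhelmingly male: the input data does not accurately represent the
# population, which is the selection bias Lee describes.
```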

“Data scientists might choose to introduce the so-called bias balancing representation in the data,” he said. “More companies have also created AI ethics boards to try and define the company’s policies surrounding AI more clearly.”
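
Lee did not spell out the technique, but one common reading of balancing representation in the data is inverse-frequency reweighting, where under-represented groups receive proportionally larger sample weights. A minimal sketch, reusing the hypothetical data above:

```python
from collections import Counter

# The same hypothetical, skewed training labels as above.
training_set = ["male"] * 450 + ["female"] * 50

def inverse_frequency_weights(records):
    """Weight each group so the weighted group totals come out equal."""
    counts = Counter(records)
    total = len(records)
    num_groups = len(counts)
    # weight = total / (num_groups * group_count): rarer groups weigh more
    return {group: total / (num_groups * count) for group, count in counts.items()}

weights = inverse_frequency_weights(training_set)
print(weights)  # {'male': 0.555..., 'female': 5.0}

# Sanity check: each group now carries identical weighted mass, so a
# learner that honours per-sample weights sees a balanced dataset.
weighted_totals = Counter()
for record in training_set:
    weighted_totals[record] += weights[record]
print(dict(weighted_totals))  # ~{'male': 250.0, 'female': 250.0}
```

This inverse-frequency formula is the same one scikit-learn applies when `class_weight="balanced"` is requested, although reweighting is only one of several possible mitigation techniques.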

The current legal framework, however, needs further strengthening to better protect AI-generated works, such as poems or paintings, in copyright cases.

As copyright law requires a minimum degree of human creativity or intervention, “in most of the jurisdictions, such as the United States and EU, it is extremely difficult to use copyright [law] to protect AI-generated work”, CUHK’s Lee said.

But the situation may be improving. Lee cited what he called “a benchmark case in China” where Tencent Holdings successfully claimed the copyright of a news article written by a program called Dreamwriter. Tencent won its lawsuit against portal WDZJ, which copied the article without consent. In January 2020, a Shenzhen court ruled that Tencent owned the article’s copyright.

Lee said that Dreamwriter’s autonomous operation actually reflected its developers’ personalised selection and arrangement of data, which triggered the writing of the article.

Still, it is not easy to identify human intervention for every AI program, Lee said. “Human creativity may not be that easily applicable to all kinds of AI creations,” he said. “That is because AI models are usually applied in a black box … It is actually very difficult to find some kind of specific human intervention or human creativity.”
