
Basic ethical questions of transparency and bias in AI remain unsolved, experts say
- Big Tech companies are expected to perform a delicate balancing act: advancing AI development while ensuring the technology’s ethical application
- Proponents see AI as an opportunity to transform a broad swathe of industries, including transport, financial services, retail and media
“AI is genuinely a huge challenge to data privacy,” said Alan Chiu, managing partner at Hong Kong law firm ELLALAN. “Big data analysis, at least in some ways, inherently conflicts or stretches the limits of fundamental data protection principles.”
Without elaborating, Chiu declared that “most AI technologies lack transparency”.
Any trepidation about AI, however, has long been quashed by proponents who see AI as an opportunity to transform a broad swathe of industries, including transport, financial services, retail and media. That is especially true in mainland China, where the Big Tech companies represent the AI sector’s biggest investors, developers and users.
The concerns Chiu raised at the conference highlight the delicate balancing act Big Tech companies must perform in advancing AI development while ensuring the technology’s ethical application.
AI systems should be human-centric, putting the welfare of consumers “at the front and centre of any AI project”, he said.
The much-anticipated Personal Information Protection Law (PIPL) will be China’s first law dedicated to protecting personal information. It is currently undergoing a second round of review and is expected to be rolled out later this year.
Bias in AI systems occurs partly because of the data used to train machines, according to Jyh-An Lee, a law professor at the Chinese University of Hong Kong (CUHK).
“The readily available data might be biased in favour of existing practices, and sometimes that bias might also result from data selection, which means the input data does not accurately represent the population or context,” Lee said.
“Data scientists might choose to introduce the so-called bias balancing representation in the data,” he said. “More companies have also created AI ethics boards to try and define the company’s policies surrounding AI more clearly.”
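The rebalancing Lee describes can be illustrated with a small sketch. The Python snippet below is purely illustrative and is not drawn from any system mentioned in this article; the function name, toy data and oversampling approach are all assumptions, showing one common way practitioners even out group representation in a training set before a model learns from it.

```python
# A minimal, hypothetical sketch of "balancing representation in the data":
# oversampling an under-represented group so the training set reflects
# the population more evenly. Names and data are illustrative only.
import random
from collections import Counter

def balance_by_group(records, group_key, seed=42):
    """Oversample smaller groups until every group matches the largest one."""
    rng = random.Random(seed)
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Draw extra samples with replacement to close the gap to the target.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Toy training set in which group "B" is under-represented.
data = [{"group": "A", "label": 1}] * 80 + [{"group": "B", "label": 0}] * 20
print(Counter(r["group"] for r in data))                              # A: 80, B: 20
print(Counter(r["group"] for r in balance_by_group(data, "group")))   # A: 80, B: 80
```

Oversampling is only one such technique; practitioners may instead reweight examples or collect more data, but the goal in each case is the same as Lee’s: input data that better represents the population or context.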
The current legal framework, however, needs further strengthening to better protect AI-generated works, such as poems or paintings, in copyright cases.
As copyright law requires a minimum degree of human creativity or intervention, “in most of the jurisdictions, such as the United States and EU, it is extremely difficult to use copyright [law] to protect AI-generated work”, CUHK’s Lee said.
Lee said that the autonomous operation of Dreamwriter, a news-writing program developed by Tencent Holdings, actually reflected its developers’ personalised selection and arrangement of data, which triggered the writing of the article.
Still, it is not easy to identify human intervention for every AI program, Lee said. “Human creativity may not be that easily applicable to all kinds of AI creations,” he said. “That is because AI models are usually applied in a black box … It is actually very difficult to find some kind of specific human intervention or human creativity.”
