Editorial | Protecting personal data is a two-way street

  • Companies that feed customers’ information and behaviour into AI models should be transparent, while the public should understand the risks and think twice before clicking ‘agree’

Data protection is a growing concern, especially with the rise of AI algorithms. Image: Shutterstock

Tailoring sales pitches to customers is nothing new in the history of human trade, but long gone are the days when it was merely a case of hawkers sizing up potential buyers in a market.

Today, marketers often rely on customer data gathered from credit cards, apps, e-commerce sites, shop cameras and other sources. Increasingly, this vast amount of information is used to train artificial intelligence (AI) models, computer programs that predict consumer behaviour.

Industries from retail to banking insist AI models are essential for improved efficiency, user experiences and business revenues. But while some of those benefits may be welcome, most consumers have serious reservations about the use of AI to manipulate or bypass their decision-making.

Reflecting those concerns, a Hong Kong Consumer Council survey of more than 1,000 people in the city found some 74 per cent worried about excessive data collection. The survey conducted in late 2021 also revealed that only six of 112 online stores informed customers that their data would be used for AI purposes.

The council is right to follow up the study with a call for more comprehensive privacy legislation. Policymakers around the world are discovering that drafting and enforcing privacy laws is difficult given the rapid pace at which technology is evolving.

Lack of transparency about where data ends up is a further challenge, most recently underscored by a Twitter whistle-blower's allegations of severe shortcomings in the social media giant's handling of users' personal data.
