The topic of facial recognition has stirred up a great deal of controversy in recent years. Concerns are growing, particularly over data privacy and data protection, now that the technology is so prevalent. Some people use facial recognition daily to unlock their phones or authorise payments, and many encounter it at security checks in workplaces or airports.

However, people have begun to question the ethics of facial recognition technology, as well as its accuracy and potential bias. A recent study by the Massachusetts Institute of Technology found that the technology can be gender- and/or race-biased, probably owing to imbalances in the training data used. In another study last year, facial recognition software mistakenly matched 28 US Congress members to criminal mugshots. Some US cities have even begun to ban the use of the technology, including by the police. British MPs believe legislation should be put in place before facial recognition trials begin, while the European Union is seeking regulations to limit what it calls the “indiscriminate” use of facial recognition.

These reactions are not surprising for any new technology with such a wide potential impact on our lives. Careful attention and scrutiny should be paid to the accuracy of the technology, as well as to informed consent, legal liability and ethical use. These are all healthy steps towards ensuring legitimate use while respecting the rights of individuals. But artificial intelligence and deep-learning technology need not be tools of privacy invasion.

The AI technology behind facial recognition has much wider uses. It can be used not only to identify a person, but also to detect the presence of a face and its features (a lighter-weight task, sketched in code below), as well as to predict health, emotions and mental states. Depending on the application and the algorithms used, the data privacy implications differ. Recognising faces is only one of the many features deep-learning technology has to offer.

The most interesting is probably facial analytics, which analyses photos or videos to extract facial characteristics and predict gender, age and even body mass index. It has been used to detect early signs of diabetes, heart disease and dementia, and to measure heart rate and blood pressure. Insurance companies are using facial analytics to simplify underwriting: the analytics allow them to predict a person’s state of health, lifestyle and life expectancy without a medical exam.

There are also numerous applications in health care, such as automating the monitoring of patients’ conditions and emotions, and detecting behavioural, developmental or mental disorders. In the retail industry, understanding a person’s emotions is very important, giving retailers valuable insights into customers’ perceptions of their products. Chatbots are another popular AI technology: the ability to detect emotions from facial images or voices will allow conversational bots to be more empathetic in their responses.

AI and machine learning have the power to radically change how we work and live, and to transform how companies do business, offering people unprecedented convenience and companies innovative products and services. The challenge is to do so without sacrificing privacy, ethics and fairness. With AI making important decisions about our lives, it will need to be trusted.
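To make the earlier distinction concrete: detecting that a face is present is a much simpler, and far less privacy-sensitive, task than recognising whose face it is. The sketch below is a minimal example using OpenCV’s bundled Haar-cascade detector in Python; the input file name is a hypothetical placeholder, and the detector parameters are common defaults rather than tuned values. It locates faces without inferring any identity.

```python
# Minimal sketch: face *detection* (locating faces), as opposed to
# face *recognition* (identifying a person). Assumes OpenCV is
# installed (pip install opencv-python); the cascade file ships with it.
import cv2

# Pre-trained frontal-face detector bundled with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

def detect_faces(image_path: str):
    """Return bounding boxes (x, y, w, h) for faces found in an image."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Scan the image at multiple scales; these parameters are
    # common defaults, not tuned values.
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if __name__ == "__main__":
    # "photo.jpg" is a hypothetical input file.
    for (x, y, w, h) in detect_faces("photo.jpg"):
        print(f"Face at x={x}, y={y}, size {w}x{h}")
```

Nothing here identifies anyone. Matching a detected face against a database of known faces is a separate step, and it is that step which raises the sharpest privacy questions.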
Mechanisms need to be put in place to ensure users can trust the recommendations, predictions and decisions made by AI systems. AI needs to be explainable rather than a black box: without explanations, decisions made by AI will be hard to justify to auditors or regulators.

AI algorithms such as facial recognition and facial analytics won’t be trusted if they’re not fair. However, AI is only as fair and unbiased as the data sets it learns from. Data deficiencies may also undermine the accuracy of the decisions, predictions and analyses that AI systems produce. It is therefore of the utmost importance that only quality, unbiased data sets are used for machine learning (a simple balance check is sketched below), particularly to ensure AI complies with fairness regulations.

Professor Andy Chun is the convenor of the AI Specialist Group within the Hong Kong Computer Society. He is also an honorary adjunct professor at City University of Hong Kong. He currently works as the regional director of technology innovation at Prudential Corporation Asia.
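As a brief illustration of the data-balance point above: one simple first check on a training set is to measure how each group is represented before any model is trained. The sketch below is a minimal, assumed example in Python; the CSV file and its "gender" column are hypothetical stand-ins for real data-set metadata, and genuine fairness auditing involves far more than head counts.

```python
# Minimal sketch: checking a labelled face data set for demographic
# balance before training. "face_dataset_labels.csv" and its column
# names are hypothetical.
import csv
from collections import Counter

def group_shares(csv_path: str, column: str) -> dict:
    """Return each group's share of the rows in the data set."""
    with open(csv_path, newline="") as f:
        counts = Counter(row[column] for row in csv.DictReader(f))
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

if __name__ == "__main__":
    shares = group_shares("face_dataset_labels.csv", "gender")
    for group, share in sorted(shares.items()):
        print(f"{group}: {share:.1%}")
    # A heavily skewed split is a warning sign: a model trained on this
    # data may perform worse on under-represented groups, the kind of
    # imbalance the MIT study pointed to.
```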