China may already have been home to most of the world’s most closely monitored cities before the coronavirus pandemic, but surveillance technologies, and in particular facial recognition software, have seen a new surge in popularity as governments scramble for ways to identify potential cases and maintain security while reducing human-to-human contact.

Around the world, the artificial intelligence-based technology has been increasingly deployed by law enforcement and border control agencies to secure access and improve surveillance. But this is not without controversy. In the wake of the Black Lives Matter protests that swept the US earlier this year, several companies including Microsoft, IBM and Amazon announced they would either pause selling police their facial recognition systems or stop producing them entirely.

Here’s a summary of what we know about facial recognition technology.

How does facial recognition work?

Facial recognition systems identify people from a database of images, including still photographs and video. Deep learning, a subset of artificial intelligence, speeds up a system’s face-scanning capabilities as it learns more about the data it is processing. Such systems require vast amounts of information to become faster and more accurate.

Essentially, these systems generate a unique “face print” for each subject by reading and measuring dozens to thousands of “nodal points”, including the distance between the eyes, the width of the nose and the depth of the eye sockets. With a network of surveillance cameras, recognition systems can process a wider range of features, including a subject’s height, age and the colour of their clothes.

On the iPhone, the built-in camera analyses more than 30,000 infrared dots to create a rough 3D model of a user’s face.
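The “face print” idea above can be illustrated with a minimal sketch: represent each face as a vector of nodal-point measurements, then treat two faces as a match when their vectors are close enough. The measurements, names and threshold below are invented for illustration; real systems use deep-learning embeddings with hundreds of dimensions rather than three hand-picked distances.

```python
import math

# Hypothetical sketch of a "face print": a handful of nodal-point
# measurements (in millimetres) packed into a vector.
def face_print(eye_distance, nose_width, eye_socket_depth):
    """Pack a few illustrative nodal-point measurements into a tuple."""
    return (eye_distance, nose_width, eye_socket_depth)

def distance(print_a, print_b):
    """Euclidean distance between two face prints."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(print_a, print_b)))

def is_same_person(print_a, print_b, threshold=3.0):
    """Prints closer than the (arbitrary) threshold count as one person."""
    return distance(print_a, print_b) < threshold

enrolled = face_print(62.0, 35.0, 24.0)   # stored in the database
probe    = face_print(62.5, 34.6, 24.3)   # captured by a camera
stranger = face_print(70.0, 40.0, 20.0)   # a different person

print(is_same_person(enrolled, probe))     # close measurements match: True
print(is_same_person(enrolled, stranger))  # distant measurements do not: False
```

Production systems such as Apple’s Face ID build this kind of template from a full 3D infrared depth map rather than a few 2D distances, which is what allows them to tolerate changes in lighting and appearance.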
The system is designed to adapt to changes in a user’s appearance, such as wearing cosmetic make-up under various lighting conditions, or wearing a hat, scarf, glasses or contact lenses, according to Apple’s website.

What is it used for?

Law enforcement agencies worldwide have used facial recognition to identify and track down suspects, although the technology is generally less widely deployed elsewhere in the world than it is in China.

China has embraced the use of facial recognition and other AI surveillance technologies across a wide variety of applications. Facial recognition has been used to detect drowsy drivers, combat crystal meth abuse and even stop people stealing toilet paper from public restrooms. Other notable uses that recently made headlines include automated terminals where couples can obtain marriage certificates, systems preventing minors from bypassing China’s strict anti-addiction measures, and facial recognition for non-human subjects: cats, dogs and even animated characters.

Why is it more relevant than ever amid the pandemic?

When the world’s most populous country went into lockdown earlier this year due to the coronavirus pandemic, facial recognition software became a crucial way to identify people without in-person contact. It was used, for example, as part of the verification process for Chinese residents registering for the national initiative that assigned them coloured QR codes determining whether they had to be quarantined. Many Chinese neighbourhoods also reportedly installed facial recognition-enabled security access systems to replace those using access cards.

China’s largest artificial intelligence company, SenseTime, saw demand surge during the pandemic as local governments adopted its technology to battle the coronavirus.
When a Covid-19 cluster was detected in northeastern China, for instance, the government installed sensors at subway entrances in the area that could detect whether passengers were wearing masks, take their temperatures and even identify them with their faces covered.

Why is facial recognition controversial?

Chinese people have in the past been seen as happier than their Western counterparts to trade privacy for security, but this appears to be changing as more grow nervous about privacy and security risks. Baidu CEO Robin Li sparked a furious reaction online in 2018 after saying Chinese people are willing to give up data privacy for convenience, with some pointing out that it was not that they were “willing” to give up data privacy but that some apps and services did not give them a choice.

In November last year, a law professor in east China sued a wildlife park for breach of contract after it replaced its fingerprint-based entry system with one that uses facial recognition. A survey of over 6,000 people in China at the end of last year found that almost 80 per cent were worried about data leaks. And perhaps they should be: such leaks are rampant, and most recently images of people wearing masks were found for sale in the country for as little as US$0.007 each.

A large part of the problem is that facial recognition is a comparatively new field, and regulation on who has the right to collect and store images of people, and how securely those images are kept, has not yet caught up with the technology, which is now so inexpensive that it is being used by gyms, restaurants, supermarkets and amusement parks.

It is not just in China. American facial recognition company Clearview AI, whose main clients are US law enforcement agencies, has reportedly amassed more than 3 billion images scraped from sites like Twitter, YouTube and Facebook without the knowledge or consent of their owners, despite requests from the social media platforms to stop using these photos.
In one of the largest consumer privacy settlements in US history, Facebook also paid US$550 million early this year to avert a trial over its photo-scanning technology, which users said gathered and stored their biometric data without their permission.

With advances in the technology and the rising popularity of online financial services, some experts have warned that scammers could use deepfake videos to fool facial recognition and access victims’ internet banking or mobile payment accounts. However, there have been no confirmed cases of this happening so far.

Aside from global concerns about privacy and security, facial recognition has also come under fire for the role it could play in perpetuating racial and gender biases. Like all AI technologies, the accuracy of facial recognition relies heavily on the data it is trained on, and this data is likely to be skewed depending on where it is collected.

A US study published last December tested facial recognition algorithms from 99 developers worldwide and found they generally returned a high number of false positives for East Asian people, wrongly identifying photos of two different individuals as the same person. However, for a number of algorithms developed in China, including those by leading facial recognition start-ups Megvii and Yitu Technology, this effect was reversed, with fewer false positives for East Asian faces than Caucasian ones in some cases, according to the study by the US National Institute of Standards and Technology (NIST).

Chinese AI companies including Hikvision, Megvii, SenseTime and Yitu were identified in an April 2019 report by The New York Times as offering features that allowed Uygur Muslims to be identified through facial recognition. SenseTime and Megvii have denied ethnic profiling, while Hikvision started phasing out minority recognition in 2018, the Times reported, citing people familiar with the matter.
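The false-positive measurement behind studies like NIST’s can be sketched in a few lines: an algorithm scores how alike two photos look, and any pair of photos of *different* people that scores above the match threshold is a false positive. All the scores and group labels below are made up purely to illustrate the computation, not real NIST data.

```python
# Illustrative sketch of a false-positive-rate calculation. A false
# positive is a different-person photo pair the algorithm wrongly
# accepts as a match (score at or above the threshold).
def false_positive_rate(impostor_scores, threshold):
    """Fraction of different-person pairs wrongly accepted as a match."""
    false_positives = sum(1 for s in impostor_scores if s >= threshold)
    return false_positives / len(impostor_scores)

# Hypothetical similarity scores for DIFFERENT-person photo pairs,
# split by demographic group (higher score = more alike to the model).
group_a_scores = [0.31, 0.42, 0.55, 0.28, 0.61, 0.39, 0.47, 0.33]
group_b_scores = [0.52, 0.66, 0.71, 0.48, 0.58, 0.63, 0.45, 0.69]

threshold = 0.60
print(false_positive_rate(group_a_scores, threshold))  # 0.125
print(false_positive_rate(group_b_scores, threshold))  # 0.5
```

A gap like the one between the two invented groups above, at the same threshold, is the kind of demographic disparity such studies report: the algorithm misidentifies members of one group far more often than the other.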
More recently, Zhejiang Dahua Technology, a Chinese surveillance company with eyes on the US market, fell under scrutiny after researchers found software code that appeared to enable ethnic profiling of China’s Uygur minority using AI.

And while researchers have been discussing possible biases in AI technology for some time, the issue took on greater significance in the US in the wake of the Black Lives Matter protests this year, with fears that such technology could be used to discriminate against ethnic minorities prompting some big tech firms to pause selling police their facial recognition systems or stop producing them entirely.

So what’s being done about these issues?

Amid growing pressure, authorities and companies worldwide are working on setting standards and drawing up rules. In March, China updated its guidelines on collecting biometric data to stipulate that from October, users must give active consent to the collection of biometric data, whether through a pop-up window, a prompt or other means. Service providers also have to tell users about the purpose, method and scope of collection, among other information.

The European Union, which has been stricter on privacy issues than other regions, has been mulling a temporary ban on facial recognition technology, a move Alphabet chief executive Sundar Pichai has backed. The ban has not yet materialised, although the EU has not ruled it out.

More tech companies have also been taking an active role in setting standards for the technology. Megvii was the first facial recognition company in China to issue guidelines on the technology’s technical security and ethics, in July last year, following a similar initiative from Google’s Pichai, who published a list of AI principles in the wake of the now-abandoned Project Maven.
Other companies that have joined the chorus include SenseTime, another major Chinese AI company, which took the lead of a national standardisation group for facial recognition technology last autumn, along with Xiaomi and iFlyTek, which were also part of the group.

If I wear a mask, can I still be identified?

Yes, although this wasn’t always the case. Previously, Chinese AI companies focused their research on voluntary, front-facing recognition in applications that demanded high accuracy. One researcher told the Post last year that the correct recognition rate of facial recognition systems was projected to drop by about 70 per cent if people concealed even one-fifth of their face. Others said the rate would vary depending on which part of the face is covered, and how much of it.

When the pandemic broke out and many people started wearing masks earlier this year, many facial recognition systems failed, prompting tech companies to update their software. New systems can now recognise not just people wearing masks over their mouths, but also people in scarves and even fake beards.

FaceGo, a Beijing-based company that makes attendance software based on scanning employees’ faces, said earlier this year that its software allowed employees to scan their faces to enter their offices even with masks on. Baidu also released what it said was the first free open-source face scan software to identify people not wearing protective masks, which could be used to send an alert if someone entered an office or a public space without one.

Although the pandemic has provided the impetus for a wider roll-out of facial recognition products able to identify partially covered faces, the technology is not exactly new.
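One common way such updated systems cope with masks is to derive the face print from the upper part of the face only, so that covering the mouth and chin does not change the measurements used. The sketch below illustrates that idea with invented landmark coordinates; it is not any vendor’s actual method.

```python
# Hypothetical sketch: build a face print from landmarks around the
# eyes and nose only, so a mask hiding the mouth leaves it unchanged.
def upper_face_print(landmarks):
    """Measure only from landmarks the mask does not cover."""
    visible = {name: point for name, point in landmarks.items()
               if name in ("left_eye", "right_eye", "nose_bridge")}
    lx, ly = visible["left_eye"]
    rx, ry = visible["right_eye"]
    nx, ny = visible["nose_bridge"]
    # Distance between the eyes, and from the eye midpoint to the nose.
    eye_span = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    eye_to_nose = ((nx - (lx + rx) / 2) ** 2
                   + (ny - (ly + ry) / 2) ** 2) ** 0.5
    return (eye_span, eye_to_nose)

# Invented pixel coordinates for the same person, with and without a mask.
unmasked = {"left_eye": (30, 40), "right_eye": (70, 40),
            "nose_bridge": (50, 60), "mouth": (50, 85)}
masked = {"left_eye": (30, 40), "right_eye": (70, 40),
          "nose_bridge": (50, 60)}  # mouth landmark hidden by the mask

print(upper_face_print(unmasked) == upper_face_print(masked))  # True
```

Because both prints come out identical, a match made against the enrolled unmasked photo still succeeds when the probe photo shows a masked face.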
As far back as 2017, Stanford University postdoctoral fellow Amarjot Singh and his team published research on an algorithm that could recognise people wearing eyeglasses, fake beards, scarves and hard hats by locating key points on the face around only the eyes and nose. Researchers from the University of Bradford in the UK also published a paper in May last year which found that facial recognition technology could achieve 100 per cent accurate identification even when only the top, the right half or three-quarters of a face was visible.

Are there other ways to fool facial recognition?

While tech companies and researchers are constantly working to improve the accuracy of facial recognition, the technology is not perfect (yet), and privacy-conscious individuals are still finding ways to avoid being identified and tracked against their wishes. Some of the more creative solutions include wearing clothing printed with licence plates to fool surveillance robots into thinking you are a car, projecting images onto your face, and wearing a futuristic-looking brass frame that makes it impossible for facial recognition to measure the distances between your features.

If you still want photos of your real face on social media but are concerned about your online privacy, University of Chicago researchers may have just the thing. They came up with software called Fawkes that makes small, pixel-level changes to your pictures that are virtually undetectable to the human eye, yet can prevent third-party facial recognition from identifying you.

What’s next for facial recognition?

Despite growing concern over privacy and security, facial recognition is likely to grow in popularity across the world.
The industry is projected to see a compound annual growth rate of 14.3 per cent from 2019 to 2028, with the Asia-Pacific region becoming both the fastest-growing and the largest market globally in the coming years, online platform Research And Markets estimated in a September report.

Companies and authorities are also increasingly combining facial recognition with other technologies to create advanced surveillance systems that can identify people even when their faces cannot be seen. For example, Chinese artificial intelligence start-up Watrix has developed software that can identify a person from 50 metres (164 feet) away by analysing how they walk, while other companies have tested systems that recognise individuals by their voice or even their heartbeat.
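To put that 14.3 per cent compound annual growth rate in perspective, the market simply multiplies by 1.143 each year. The base value below is arbitrary (set to 1 so the result reads as a multiple); only the growth rate and year range come from the report cited above.

```python
# Compound annual growth: value after n years = base * (1 + rate)^n.
def project(base, cagr, years):
    """Compound a base value forward by `years` at the given annual rate."""
    return base * (1 + cagr) ** years

base_2019 = 1.0                     # arbitrary units; 1 = the 2019 market
size_2028 = project(base_2019, 0.143, 2028 - 2019)
print(round(size_2028, 2))          # 3.33 - roughly 3.3x the 2019 market
```

In other words, a 14.3 per cent annual rate sustained over nine years implies the facial recognition market more than tripling between 2019 and 2028.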