Cameras that use facial recognition technology are becoming more common around the world, but are they a breach of our civil rights? Photo: Alamy

Big Brother or necessary surveillance – does facial recognition tech infringe on our privacy and civil rights?

  • Police, governments and even shops around the world are keeping tabs on people through AI-assisted facial recognition systems
  • In the United States the FBI has 30 million citizens on its facial recognition database, while China’s is said to contain almost the whole population

Many people worried about facial recognition technology have focused their attention on the popular application FaceApp which, among its other functions, allows users to see what they will look like when they get older.

FaceApp, a face-morphing application that its Russian developers describe as a program that “will transform your face using AI in just one tap”, went viral in 2017 with around 80 million users.

To make it work, users select a selfie that is then uploaded to the company’s servers, where its proprietary technology processes it.

Rumours abounded that FaceApp was storing selfies on its servers as data for undisclosed facial recognition purposes connected to the Russian government, but they proved to be unfounded.
Screenshots from FaceApp. There have been rumours that Russia was collecting people’s selfies through this app. Photo: courtesy of FaceApp

FaceApp uses Amazon servers in the United States to store users’ images, and the company seemingly has no links to the Russian authorities.

Nonetheless, the rumours, which were widely reported, served to bring the issue of facial recognition technology into the limelight.

In fact, the problems with facial recognition technology lie much closer to home – with the domestic police forces, other government agencies, and shops that are using it.

In Britain, police forces like London’s Metropolitan Police and the South Wales Police have been testing out the technology.

Some shops in the UK have admitted using commercial software called Facewatch to match the faces of those entering store premises to pictures in a police database of known shoplifters.
Visitors at the Security China 2018 expo in Beijing, China were tracked by facial recognition technology from state-controlled surveillance equipment manufacturer Hikvision. Photo: AP/Ng Han Guan

In the US, the FBI’s Next Generation Identification-Interstate Photo System – a facial recognition program – searches a database of around 30 million citizens to support criminal investigations.

Around 80 per cent of these photographs are of convicted criminals, which leaves roughly six million images of people with no criminal conviction. The database is a mix of criminal mugshots and civil photos, such as passport and driver’s licence images provided by participating state authorities.

In mainland China, the situation is more alarming. According to a report by CNBC, China’s facial recognition database includes nearly everyone in the country, and a program developed by Yitu Technology can recognise a face in the database in around two seconds.


The database is compiled from photos used on citizens’ ID cards, and maintained by the central government with support from private companies.

The country’s 170 million surveillance cameras also play a role – a company called SenseTime, for instance, can extract faces from surveillance videos and make them searchable in a database. SenseTime’s program is already being used by the police.

Facial recognition programs are powered by proprietary algorithms, so each version of the technology is different. But the fundamentals are the same for all of them.

The technology uses machine learning, the computing process that underlies most modern artificial intelligence programs. Machine learning involves feeding a computer a vast data set; the computer then processes the data with an algorithm, training itself to perform a particular task.
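As a rough illustration of that idea, the toy Python sketch below trains a nearest-centroid classifier on a handful of made-up two-dimensional points and then labels a new one. The data and the method are assumptions chosen purely for simplicity; production facial recognition systems train far larger models on millions of images.

```python
# Toy illustration of "feed the computer data, let it train itself on a task":
# a nearest-centroid classifier built from a handful of made-up 2D points.
# This is a teaching sketch only, not how real facial recognition models are trained.

def train(samples):
    """samples: list of (features, label) pairs. Returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Label a new data point by the closest learned centroid."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: squared_distance(centroids[label], features))

data = [([1.0, 1.2], "A"), ([0.9, 1.0], "A"), ([3.0, 3.1], "B"), ([3.2, 2.9], "B")]
model = train(data)
print(predict(model, [1.1, 1.1]))  # prints "A"
```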

The FBI’s facial recognition database has information on 30 million people. Photo: courtesy of the FBI

For facial recognition systems, libraries of facial images are uploaded to a computer, which measures data points like the distance between the eyes, to create a template.
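The sketch below shows, in heavily simplified Python, how such a template might be assembled from landmark measurements. The landmark names and chosen distances are assumptions made for illustration; commercial systems typically learn their own features rather than relying on hand-picked distances.

```python
from math import dist  # Euclidean distance between two points

def make_template(landmarks):
    """Build a simple face template from (x, y) landmark coordinates.

    `landmarks` is a dict of assumed points such as the eye centres and
    nose tip; the measurements chosen here are purely illustrative.
    """
    return [
        dist(landmarks["left_eye"], landmarks["right_eye"]),    # distance between the eyes
        dist(landmarks["left_eye"], landmarks["nose_tip"]),
        dist(landmarks["right_eye"], landmarks["nose_tip"]),
        dist(landmarks["nose_tip"], landmarks["mouth_centre"]),
    ]

# Hypothetical landmarks for one face image
template = make_template({
    "left_eye": (120, 160), "right_eye": (182, 158),
    "nose_tip": (151, 200), "mouth_centre": (150, 240),
})
```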

In simplified terms, when a new image – a “probe” – is fed into the system, it compares the template of the new image with those already in its library. The output varies from system to system.

The FBI’s computer returns up to 50 facial images, for instance, which are then evaluated by humans to see if they are worth following up as leads.
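In simplified Python, and again only as an assumption about how such a workflow might look, the matching step could compare the probe’s template against every stored template and hand back a shortlist of the closest records for analysts to review.

```python
def match_probe(probe_template, library, max_results=50):
    """Return up to `max_results` library records closest to the probe.

    `library` maps a record ID to a stored template (a list of numbers);
    the shortlist is meant for human review, mirroring the workflow
    described above in very simplified form.
    """
    def distance(a, b):
        # Euclidean distance between two templates of equal length
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    ranked = sorted(library.items(), key=lambda item: distance(probe_template, item[1]))
    return ranked[:max_results]

# Hypothetical usage with tiny made-up templates
library = {"record_001": [62.0, 48.5, 47.9, 40.2], "record_002": [58.3, 51.0, 50.7, 38.9]}
print(match_probe([62.1, 48.0, 48.3, 40.0], library))  # closest records first
```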


The threats that facial recognition software poses for the civil rights of citizens are self-evident, with many in the West noting that it could lead to a “Big Brother” society – a reference to British writer George Orwell’s anti-totalitarian classic novel 1984, in which the state controls citizens by tracking their every move.

“Facial recognition surveillance risks making privacy in Britain extinct,” says Silkie Carlo, the director of Big Brother Watch, a UK civil liberties group focusing on surveillance.

In China, the government has been condemned by international rights groups for using facial recognition technology to track the movements of Uygurs.

Civil rights organisations say the use of facial recognition technology is unethical.

Along with the use of the programs, the ethics of the databases themselves are being questioned.

So far, databases being used for facial recognition programs in the West have been compiled from photographs originally taken for other purposes, and they are used without the subjects’ consent or knowledge.

People whose photos are in the FBI database, for instance, don’t know that they are there.

The legality of such practices is murky.

Ed Bridges, a British citizen, has taken legal action against the South Wales Police, claiming that the use of facial recognition technology on him was an unlawful violation of his privacy.


Bridges says he was scanned by the police twice, once while he was out shopping, and once while he was at a peaceful demonstration. Bridges claims that the use of the technology breaches privacy and equality laws.

Another problem is that the facial recognition technology used in the UK and the US does not work very well.

In a test conducted by the American Civil Liberties Union, Amazon’s Rekognition technology, a facial recognition program the company is marketing to police forces in the US, misidentified 28 US politicians as criminals.

Statistics obtained by Big Brother Watch in the UK showed that 96 per cent of the Metropolitan Police’s facial recognition matches “misidentified innocent members of the public” in recent trials.

A screenshot from Facewatch, commercial software used to match the faces of those entering shops to pictures in a police database of known shoplifters. Photo: courtesy FaceWatch

Other tests have noted that commercially available programs return less accurate results for women, people of colour, and children than they do for white adult males.

Police and manufacturers have defended the technology by pointing out that humans often make mistakes when identifying suspects, noting that around 70 per cent of wrongful convictions have been attributed in part to faulty human memory.

As facial recognition technology is relatively new, there are no regulations that specifically govern its use.

Regulation would seem to be the next step, but civil liberties campaigners say that is missing the point – the debate should not be about how to regulate facial recognition technology, but whether it should be used at all.

Scanning people’s faces as they lawfully go about their daily lives, in order to identify them is a potential threat to privacy that should concern us all
Information Commissioner’s Office, London

In May, the city of San Francisco passed a municipal ordinance banning the use of facial recognition technology by the police and other city agencies, and other US cities and states are considering following suit.

In the UK, the Information Commissioner’s Office (ICO) is looking into the use of facial recognition technology in a commercial development in King’s Cross in London.

“Scanning people’s faces as they lawfully go about their daily lives, in order to identify them,” said the ICO in a statement, “is a potential threat to privacy that should concern us all.”

This article appeared in the South China Morning Post print edition as: Why we should be worried about AI-assisted facial recognition systems