Facial recognition researcher and Amazon spar over her findings that its artificial intelligence is biased
- Amazon dismissed what it called Joy Buolamwini’s “erroneous claims” and said the study confused facial analysis with facial recognition
- A coalition of researchers, including AI pioneer Yoshua Bengio, a recent winner of the Turing Award, has criticised Amazon’s hostile response
Facial recognition technology was already seeping into everyday life – from your photos on Facebook to police scans of mugshots – when Joy Buolamwini noticed a serious glitch: some of the software could not detect dark-skinned faces like hers.
That revelation prompted the Massachusetts Institute of Technology researcher to launch a project that is having an outsize influence on the debate over how artificial intelligence (AI) should be deployed in the real world.
Her tests of software created by brand-name tech firms such as Amazon uncovered much higher error rates in classifying the gender of darker-skinned women than of lighter-skinned men.
Along the way, Buolamwini has spurred Microsoft and IBM to improve their systems and irked Amazon, which publicly attacked her research methods. On Wednesday, a group of AI scholars, including a winner of computer science’s top prize, launched a spirited defence of her work and called on Amazon to stop selling its facial recognition software to police.
Her work has also caught the attention of political leaders in statehouses and Congress and led some to seek limits on the use of computer vision tools to analyse human faces.
“There needs to be a choice,” said Buolamwini, a graduate student and researcher at MIT’s Media Lab. “Right now, what’s happening is these technologies are being deployed widely without oversight, oftentimes covertly, so that by the time we wake up, it’s almost too late.”