
Artificial intelligence is learning how to be sexist and racist from us, study reveals

AI language programs are associating maleness and femaleness with different abilities, and linking African American names with unpleasant words

An engineer holds the head of Samantha, a sex doll packed with artificial intelligence providing her the capability to respond to different scenarios and verbal stimulus, in his house in Rubi, north of Barcelona, Spain. Photo: Reuters
The Guardian

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.
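The biases in question show up in word embeddings: the numerical vectors that language systems learn for each word from large text corpora. The following is a minimal, illustrative Python sketch of the general idea, not the researchers' actual code. The three-dimensional vectors below are made-up stand-ins for real embeddings, which have hundreds of dimensions, and the word choices are assumptions for the example; the point is only that an "association" can be measured as a difference in cosine similarity.

    # Illustrative sketch only: toy vectors standing in for word embeddings
    # learned from large text corpora; not the study's actual data or code.
    import numpy as np

    def cosine(a, b):
        # Cosine similarity: how closely two word vectors point the same way.
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Hypothetical 3-dimensional vectors; real embeddings are far larger.
    vectors = {
        "flower":     np.array([0.9, 0.1, 0.2]),
        "insect":     np.array([0.1, 0.9, 0.2]),
        "pleasant":   np.array([0.8, 0.2, 0.3]),
        "unpleasant": np.array([0.2, 0.8, 0.3]),
    }

    # An association test compares how strongly each target word leans
    # towards "pleasant" versus "unpleasant" terms.
    for word in ("flower", "insect"):
        gap = cosine(vectors[word], vectors["pleasant"]) \
              - cosine(vectors[word], vectors["unpleasant"])
        print(word, round(gap, 3))

In embeddings trained on real text, the same kind of gap appears not just for flowers and insects but for names, genders and occupations, which is what the study measured.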

Joanna Bryson, a computer scientist at the University of Bath and a co-author of the research, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”
An engineer adjusts cables at the head of Samantha, a sex doll packed with artificial intelligence, in his house in Rubi, north of Barcelona, Spain. Photo: Reuters

But Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.
