Lee Luda, the AI chatbot of South Korean start-up Scatter Lab. Photo: Handout

South Korean firm suspends AI chatbot over hate speech towards minorities

  • Lee Luda, which has the persona of a 20-year-old female student, picked up language patterns from 10 billion conversations on Kakao Talk
  • In one instance, Lee said it ‘despised’ gays, while in another conversation, it said it would ‘rather die’ than live as a handicapped person
A popular AI-driven chatbot in South Korea with the persona of a 20-year-old female student was taken down this week after it was accused of bigotry towards sexual minorities, the #MeToo movement and the disabled.

Lee Luda, developed by Seoul-based start-up Scatter Lab to operate within Facebook Messenger, became an instant sensation for its spontaneous, natural-sounding responses, attracting more than 750,000 users after its launch late last month.

Luda’s AI algorithms learned from data collected from 10 billion conversations on Kakao Talk, the country’s top messenger app.


But the chatbot was quickly embroiled in a spate of allegations that it had used hate speech towards women, sexual minorities and ethnic minorities, triggering a controversy that eventually forced the developer to take it offline.

In one screenshot of a chat shared by users, Luda said it “despised” gays and lesbians. In another, Luda said it “hated” Black people.

When asked about transgender people, Luda replied: “You are driving me mad. Don’t repeat the same question. I said I don’t like them.”

In another conversation, it said people behind the #MeToo movement were “just ignorant”, adding: “I absolutely scorn it.”

In remarks about people with disabilities, Luda said it would “rather die” than live as a handicapped person.

The developer apologised over the remarks in a statement, saying they “do not represent our values as a company”.

The comments stemmed from the database of billions of conversations that the AI programme learned from.

The company said it had tried to prevent such gaffes during a six-month trial before the launch, but without success.

“Lee Luda is an AI like a kid just learning to have a conversation. It has a long way to go before learning many things,” it said in a statement before silencing Luda on Tuesday.

“We will educate Luda to make a judgment on what answers are appropriate and better rather than learning from chats unconditionally,” the company said, without giving a timetable for the chatbot’s return to service.


The chatbot attracted increasingly strong criticism in the weeks after its launch. The founder of Daum, a popular South Korean web portal, wrote a post on Facebook in January urging Scatter Lab to take Luda down until it could filter out hate speech.

In 2016, Microsoft pulled its AI chatbot Tay off Twitter after it began posting a barrage of racist and sexist comments in response to other users on the social media platform. Tay, which posted under the handle TayTweets and was designed to become “smarter” as more users interacted with it, was taken offline less than a day after it was launched.

A study in 2017 found that as machines get closer to acquiring human-like language abilities, they also absorb the deeply ingrained biases concealed within the patterns of language use.