Microsoft’s AI Twitter bot gets yanked offline after spouting racist, sexist tweets

The Twitter profile for TayTweets. The AI chatbot was designed to learn from its interactions with fellow Twitter users, but instead it ended up parroting racist and sexist invective. Photo: Twitter

Tay, Microsoft’s artificial intelligence chatbot designed to engage with millennials on Twitter, lasted less than a day before it was taken offline, having launched a barrage of racist and sexist comments in response to other Twitter users.

TayTweets (@TayandYou), which began tweeting on Wednesday, was designed to become “smarter” as more users interacted with it, according to its Twitter biography. But it was shut down by Microsoft early on Thursday after it made a series of wildly inappropriate tweets.

A Microsoft representative said on Thursday that the company was “making adjustments” to the chatbot while the account remains quiet.

“Unfortunately, within the first 24 hours of coming online, we became aware of a co-ordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways,” the representative said in a written statement, without elaborating.

According to Tay’s “about” page linked to the Twitter profile, “Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding.”

While Tay began its Twitter tenure with a handful of innocuous tweets, the account quickly devolved into a bullhorn for hate speech, repeating anti-Semitic, racist and sexist invective hurled its way by other Twitter users.
