China’s Baidu turns to AI to police online content, but is the technology reliable?
Internet companies, including social network operators, are under increased scrutiny over content deemed inappropriate by Chinese authorities
Artificial intelligence (AI) has become a vital tool for internet companies in China to ensure they stay on the right side of censors, following a government crackdown on online content.
Baidu, operator of China’s largest online search engine, has stepped up efforts to ensure the search results and other content, including news and video, that it sends to users are “clean” and “decent”, using the same cutting-edge technology that powers its ventures in autonomous driving and conversational devices.
The company has used AI to identify and remove clickbait and vulgar content, chief executive Robin Li Yanhong said in a conference call with analysts on Friday.
“We proactively clean up indecent content and limit the volume of entertainment gossip in our news feed,” Li said after the Beijing-based company posted better-than-expected revenue in the first quarter on the back of its strong advertising business.
Nasdaq-listed Baidu forecast second-quarter revenue above analysts’ estimates, as advertisers continue to flock to its news aggregation service and its Netflix-like video streaming service.
“Upholding such product values may mean putting in greater effort to grow our users and revenue. But we believe such a strategy will pay off in the long run,” said Li.
He did not elaborate on how the company decides which content to remove.
Policing online content, a task that has expanded with the growth of social media, has become a major challenge for internet companies around the world. In the past few weeks, the Chinese government has launched a crackdown on various forms of “inappropriate” online content.
Targets include sensitive political news, lowbrow content, celebrity gossip and off-colour jokes deemed by Chinese authorities to be against socialist values. The crackdown has seen several of Baidu’s competitors targeted, including popular news app operator Jinri Toutiao.
In the US, there is a growing debate over the responsibility of social networks such as Facebook to police hate speech and other content, amid wider concerns about social media’s role in dividing society.
That has led more internet companies to meet the challenge with AI. SenseTime, the world’s most valuable AI start-up, took a big step towards fully automating online censorship by introducing AI-enabled content screening software earlier this week.
The product can automatically screen online videos for pornographic or violent images, as well as text containing messages deemed sensitive by the authorities, with an accuracy rate as high as 95 per cent.
Facebook is also increasingly relying on AI to monitor its service and identify content that violates the company’s policies and guidelines.
But the social network’s chief executive, Mark Zuckerberg, has said AI is still far from perfect.
“It’s easier to build an AI system to detect a nipple than what is hate speech,” Zuckerberg said in a conference call with analysts on Wednesday, following Facebook’s first-quarter earnings report.
Using AI, Facebook is able to identify and remove about 99 per cent of terrorism-related content without a user having to notify the company first.
By contrast, it is likely to take years before AI can reliably recognise hate speech, he said, calling that difference “frustrating”.