Photo: Rohingya Muslims

Facebook turns to artificial intelligence to fight hate speech as tensions run high in Myanmar

Stung by claims that it has fuelled ethnic violence against the Rohingya in Myanmar, Facebook is stepping up efforts to prevent its network from being a source of hate speech and misinformation

PUBLISHED : Thursday, 16 August, 2018, 1:57pm
UPDATED : Thursday, 16 August, 2018, 10:24pm

Facebook has unveiled new efforts to combat hate speech and misinformation in Myanmar, where the social media platform has fuelled ethnic violence against the Rohingya population.

Facebook said in a blog post that employees travelled to Myanmar over the summer to better understand the situation.

It has also hired more than 60 Myanmar language experts to review content and plans to increase that to 100 by the end of the year.

Lawmakers, human rights activists and the United Nations have criticised the role Facebook has played in Myanmar’s crisis.

Facebook’s pledge to be more involved is part of its broader defence against the spread of controversial or false information on its network globally.

Chief Executive Mark Zuckerberg has pledged to hire more staff to review posts for hate speech.

But Facebook product manager Sara Su said that people alone are not able to catch all bad content. Much of Facebook’s effort relies on artificial intelligence (AI), which Zuckerberg has pointed to as a tool that social media companies can use to parse a high volume of posts and flag potential problems.

However, AI is still far from being able to reliably identify hate speech or false information on its own. Zuckerberg has said it will take five to 10 years to train AI to recognise such nuances.
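As a rough illustration only, and not a description of Facebook's actual system, automated flagging of this kind typically amounts to a classifier that scores each post and queues high-scoring ones for human review. The Python sketch below shows the shape of that step; the names, scoring function and threshold are all hypothetical.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    post_id: str
    text: str

def flag_for_review(posts: List[Post],
                    score_fn: Callable[[str], float],
                    threshold: float = 0.8) -> List[Post]:
    """Return posts whose hate-speech score meets the review threshold.

    `score_fn` stands in for a trained classifier that maps text to a
    probability in [0, 1]; the 0.8 threshold is purely illustrative.
    """
    return [p for p in posts if score_fn(p.text) >= threshold]

# Example with a placeholder scoring function standing in for a real model.
queue = flag_for_review([Post("1", "example text")], score_fn=lambda t: 0.9)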

The technology is being tested in Myanmar, where only a small proportion of posts is flagged by Facebook users for potential policy violations.

Facebook on Wednesday said that artificial intelligence is now able to flag 52 per cent of all the content it removes in Myanmar before it is reported by users.
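That 52 per cent figure is a proactive detection rate: of everything taken down, the share that automated systems flagged before any user reported it. A back-of-the-envelope sketch of the calculation, using invented numbers rather than Facebook's, looks like this.

# Sketch of the proactive-detection metric. Both counts are hypothetical;
# Facebook has not disclosed the underlying totals.
removed_total = 10_000           # items removed in a period (illustrative)
flagged_by_ai_first = 5_200      # of those, items AI flagged before any user report

proactive_rate = flagged_by_ai_first / removed_total
print(f"Proactive detection rate: {proactive_rate:.0%}")  # -> 52%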

Facebook did not give an estimate of how many pieces of content it has removed, making it difficult to assess the scale of the problem.

But in an independent investigation, Reuters found more than 1,000 posts, comments, images and videos calling for violence against the Rohingya in the past week.

The company said it is also enforcing in Myanmar a recently updated policy addressing “credible violence”, which sets standards to remove content that has the “potential to contribute to imminent violence or physical harm”.

Facebook has largely hesitated to remove misinformation outright across its network, preferring to use its algorithms to demote false information in the news feed.

But it is willing to take a stronger hand in Myanmar because of the violence linked to misinformation there. Facebook said it is undertaking similarly focused enforcement strategies in Sri Lanka, India, Cameroon and the Central African Republic.
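As a hedged sketch of the "demote rather than remove" approach described above, and not Facebook's actual ranking code, content judged likely false can simply be scored lower in the feed, while removal is reserved for posts tied to imminent harm. The function name, threshold and demotion factor below are all hypothetical.

def demoted_score(base_score: float, misinfo_prob: float,
                  demotion_factor: float = 0.2) -> float:
    """Scale down a post's feed-ranking score when a model or fact-checker
    judges it likely false; the threshold and factor are illustrative."""
    return base_score * demotion_factor if misinfo_prob > 0.5 else base_score

print(demoted_score(10.0, 0.9))  # 2.0: demoted in the feed, but not removed
print(demoted_score(10.0, 0.1))  # 10.0: ranked normally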