A memorial photograph of slain 11-month-old girl Natalie is displayed at a temple in Phuket, Thailand, on April 27. She was murdered by her father, who broadcast the killing on Facebook Live before killing himself. Photo: AFP

Online killings spur Facebook to hire 3,000 workers to vet videos for violence

Facebook Inc will hire 3,000 more people over the next year to speed up the removal of videos showing murder, suicide and other violent acts, in its most dramatic move yet to combat the biggest threat to its valuable public image.

The hiring spree, announced by Chief Executive Mark Zuckerberg on Wednesday, comes after users were shocked by two video posts in April showing killings in Thailand and the United States.

The move is an acknowledgement by Facebook that it needs more than its recent focus on automated software to identify and remove such material.

Artificial intelligence techniques would take “a period of years ... to really reach the quality level that we want,” Zuckerberg told investors after the company’s earnings late on Wednesday.

“Given the importance of this, how quickly live video is growing, we wanted to make sure that we double down on this and make sure that we provide as safe of an experience for the community as we can,” he said.
Jiranuch Trirat (left) is comforted by friends at the funeral of her 11-month-old daughter Natalie in Phuket, Thailand, on April 2. Natalie was murdered by her father in a harrowing video he broadcast live on Facebook before committing suicide. Photo: AFP

The problem has become more pressing since the introduction last year of Facebook Live, a service that allows any of Facebook’s 1.9 billion monthly users to broadcast video, which has been marred by some violent scenes.

Some violence on Facebook is inevitable given its size, researchers say, but the company has been attacked for its slow response.

UK lawmakers this week accused social media companies including Facebook of doing a “shameful” job removing child abuse and other potentially illegal material.

In Germany, the company has been under pressure to be quicker and more accurate in removing illegal hate speech and to clamp down on so-called fake news.

German lawmakers have threatened fines if the company cannot remove at least 70 per cent of offending posts within 24 hours.
Facebook CEO Mark Zuckerberg says it will be several years before artificial intelligence can appropriately vet violent videos. Instead, Facebook is hiring thousands of workers to do so. Photo: TNS

So far, Facebook has avoided political fallout from US lawmakers or any significant loss of the advertisers it depends on for revenue. Some in the ad industry have defended Facebook, citing the difficulty of policing material from its many users. Police agencies have said Facebook works well with them.

Facebook shares fell slightly on Wednesday, and edged lower still after the bell, following its quarterly earnings.

Zuckerberg, the company’s co-founder, said in a Facebook post the workers will be in addition to the 4,500 people who already review posts that may violate its terms of service. Facebook has 17,000 employees overall, not including contractors.

Last week, a father in Thailand used Facebook Live to broadcast himself hanging his baby daughter, police said. After more than a day, and 370,000 views, Facebook removed the video. A video of a man shooting and killing an elderly stranger on a street in Cleveland last month also shocked viewers.

Zuckerberg said the company would do better: “We’re working to make these videos easier to report so we can take the right action sooner - whether that’s responding quickly when someone needs help or taking a post down.”

The 3,000 workers will be new positions and will monitor all Facebook content, not just live videos, the company said. The company did not say where the jobs would be located, although Zuckerberg said the team operates around the world.

The world’s largest social network has been turning to artificial intelligence to try to automate the process of finding pornography, violence and other potentially offensive material. In March, the company said it planned to use such technology to help spot users with suicidal tendencies and get them assistance.

However, Facebook still relies largely on its users to report problematic material. It receives millions of reports from users each week, and like other large Silicon Valley companies, it relies on thousands of human monitors to review the reports.

“Despite industry claims to the contrary, I don’t know of any computational mechanism that can adequately, accurately, 100 per cent do this work in lieu of humans. We’re just not there yet technologically,” said Sarah Roberts, a professor of information studies at UCLA who looks at content monitoring.

The workers who monitor material generally work on contract in places such as India and the Philippines, and they face difficult working conditions because of the hours they spend making quick decisions while sifting through traumatic material, Roberts said in an interview.

Facebook says that every person reviewing its content is offered psychological support and wellness resources, and that the company has a support program in place.

When Facebook launched its live service in April 2016, Zuckerberg spoke about it as a place for “raw and visceral” communication.

“Because it’s live, there is no way it can be curated,” Zuckerberg told BuzzFeed News in an interview then. “And because of that it frees people up to be themselves. It’s live; it can’t possibly be perfectly planned out ahead of time.”

Since then, at least 50 criminal or violent incidents have been broadcast over Facebook Live, including assault, murder and suicide, The Wall Street Journal reported in March.

This article appeared in the South China Morning Post print edition as: Facebook to hire 3,000 new workers to help solve its violent video problem