Facebook is monitoring which users have a propensity to flag content published by others as problematic, and which publishers are considered trustworthy by users. Photo: Reuters

Facebook is rating the trustworthiness of its users to fight fake news

The company has long relied on its users to report problematic content, but some began falsely reporting items as untrue

Facebook has begun to assign its users a reputation score, predicting their trustworthiness on a scale from zero to one.

The previously unreported ratings system, which Facebook has developed over the last year, shows that the fight against the gaming of tech systems has evolved to include measuring the credibility of users to help identify malicious actors.

Facebook developed its reputation assessments as part of its effort against fake news, Tessa Lyons, the product manager who is in charge of fighting misinformation, said in an interview.

The company, like others in the tech sector, has long relied on its users to report problematic content – but as Facebook gave people more options, some began falsely reporting items as untrue, a new twist on information warfare that the company had to account for.

It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher”, said Lyons.

A user’s trustworthiness score is not meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned.

Rather, the score is one measurement among thousands of new behavioural clues that Facebook now takes into account as it seeks to understand risk.

Facebook is also monitoring which users have a propensity to flag content published by others as problematic, and which publishers are considered trustworthy by users.

It is unclear what other criteria Facebook measures to determine a user’s score, whether all users have one, or how the scores are used.

The reputation assessments come as Silicon Valley, faced with Russian meddling, fake news and ideological actors who abuse platform policies, is recalibrating its approach to risk – and is finding untested, algorithmically driven ways to understand who poses a threat.

Some of the Facebook ads linked to a Russian effort to disrupt the 2016 US political process, released by members of the US House Intelligence Committee in November last year. Photo: AP

Twitter, for example, now factors in the behaviour of other accounts in a person’s network as a risk factor in judging whether a person’s tweets should be spread.

Still, how these new credibility systems work is highly opaque, and the companies are wary of discussing them, in part because doing so might invite further gaming – a predicament that the firms increasingly find themselves in as they weigh calls for more transparency about their decision-making.

“Not knowing how [Facebook is] judging us is what makes us uncomfortable,” said Claire Wardle, director of First Draft, a research lab within Harvard’s Kennedy School of Government that studies the impact of misinformation and is a fact-checking partner of Facebook, referring to the efforts to assess people’s credibility.

“But the irony is that they can’t tell us how they are judging us – because if they do, the algorithms they built will be gamed.”

The system Facebook built for users to flag potentially unacceptable content has in many ways become a battleground.

The activist Twitter account Sleeping Giants called on followers to take technology companies to task over the conservative conspiracy theorist Alex Jones and his Infowars site. The resulting flood of hate-speech reports led Facebook and other tech companies to ban Jones and Infowars from their services.

At the time, executives at the company questioned whether the mass reporting of Jones’ content was part of an effort to trick Facebook’s systems.

A screen displaying the Twitter account of conservative conspiracy theorist Alex Jones, photographed on August 15. Photo: AFP

False reporting has also become a tactic in far-right online harassment campaigns, experts say.

Tech companies have a long history of using algorithms to make predictions about people, from how likely they are to buy products to whether they are using false identities.

But against the backdrop of increased misinformation, the companies are now making increasingly sophisticated editorial choices about who is trustworthy.

In 2015, Facebook gave users the ability to report posts they believe to be false. A tab in the upper right-hand corner of every Facebook post lets readers report problematic content for a variety of reasons, including pornography, violence, unauthorised sales, hate speech and false news.

Lyons said that she soon realised that many people were reporting posts as false simply because they did not agree with the content. Because Facebook forwards posts that are marked as false to third-party fact-checkers, she said it was important to build systems to assess whether the posts were likely to be false in order to make efficient use of fact-checkers’ time.

That led her team to develop ways to assess whether the people who were flagging posts as false were themselves trustworthy.

“One of the signals we use is how people interact with articles,” Lyons said in a follow-up email.

“For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.”

The score is one signal among many that the company feeds into more algorithms to help it decide which stories should be reviewed.
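Facebook has not disclosed the formula. As a rough illustration of the weighting Lyons describes, the sketch below is a minimal, hypothetical Python model: the names, the smoothing choice and the priority sum are all assumptions for illustration, not Facebook’s actual implementation.

```python
# Hypothetical sketch of reporter-reliability weighting, modelled only on the
# behaviour Lyons describes. Nothing here is drawn from Facebook's systems.
from dataclasses import dataclass


@dataclass
class Reporter:
    """Tracks one user's history of false-news reports."""
    reports_made: int = 0       # total false-news flags submitted
    reports_confirmed: int = 0  # flags later confirmed false by fact-checkers

    def trust_score(self) -> float:
        """Score on a zero-to-one scale: a smoothed precision of past flags.

        Laplace smoothing (+1 / +2) keeps brand-new reporters near a neutral
        0.5 rather than 0 or 1 -- an assumed design choice, not a documented one.
        """
        return (self.reports_confirmed + 1) / (self.reports_made + 2)


def review_priority(flaggers: list[Reporter]) -> float:
    """Sum of flaggers' trust scores: a few reliable flags push a story up
    the fact-checking queue, while indiscriminate flags add little."""
    return sum(r.trust_score() for r in flaggers)


if __name__ == "__main__":
    careful = Reporter(reports_made=10, reports_confirmed=9)         # usually right
    indiscriminate = Reporter(reports_made=50, reports_confirmed=5)  # flags everything

    print(f"careful reporter: {careful.trust_score():.2f}")                # ~0.83
    print(f"indiscriminate reporter: {indiscriminate.trust_score():.2f}")  # ~0.12
    print(f"story flagged by both: {review_priority([careful, indiscriminate]):.2f}")
```

In a scheme like this, a flag from a user whose past reports were confirmed by fact-checkers moves a story up the review queue, while a user who flags everything barely registers – consistent with Lyons’ account of weighting feedback by past accuracy.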

“I like to make the joke that, if people only reported things that were [actually] false, this job would be so easy!” Lyons said. “People often report things that they just disagree with.”

She declined to say what other signals the company used to determine trustworthiness, citing concerns about tipping off bad actors.
