Byte back: Big Tech makes toxic social media worse, so governments should step in
- ‘Facebook Papers’ show reforms desperately needed including nuance and aggression filters and ‘nudges’ from policymakers
- Facebook engineers gave some emojis five times the weight of a traditional ‘like’ despite knowing more clickbait, abuse could follow
Here we are again, having yet another debate on how social media is broken and whether it can be fixed. This time, the global conversation is fuelled by hard evidence.
Like others who have a love-hate relationship with social media, in recent weeks I have spent an inordinate amount of time parsing the Facebook Papers – the series of articles published by a consortium of 17 US news outlets on the company’s struggles and inertia in dealing with harm caused by its apps.
The reports, based on a vast trove of documents supplied by ex-Facebook insider-turned-whistle-blower Frances Haugen, have triggered fresh hand-wringing among authorities on how to regulate Big Tech.
Haugen has also turned over the documents to authorities and testified before lawmakers in the US, Britain and Europe.
Speaking to British politicians, Haugen made a sobering pronouncement: propagating anger and hate, she said, was the “easiest way to grow on Facebook”.
For me, among the most startling details from the Facebook Papers was the revelation that for three years, from 2017 to 2019, the firm’s engineers gave emoji reactions such as “love”, “haha”, “sad” and “angry” five times the weight of a traditional “like”.
Facebook went ahead with the plan despite its own researchers noting that this policy of favouring “controversial” posts could lead to more spam, clickbait and abuse.
Later, data showed that posts that elicited the angry emoji were disproportionately likely to include “misinformation, toxicity and low quality news,” reported The Washington Post, one of the members of the reporting consortium.
The policy meant “Facebook for three years systematically amped up some of the worst of its platform, making it more prominent in users’ feeds and spreading it to a much wider audience,” the newspaper said.
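The leaked documents describe the weighting only in outline, but the mechanism can be pictured with a short sketch. This is purely illustrative, not Facebook's actual ranking code; the function name and the reaction labels are assumptions drawn from the reporting, with emoji reactions counted at five times a "like" as the Facebook Papers describe.

```python
# Illustrative sketch only -- not Facebook's real ranking code.
# Emoji reactions are weighted 5x a "like", per the Facebook Papers reporting.
REACTION_WEIGHTS = {
    "like": 1,
    "love": 5,
    "haha": 5,
    "sad": 5,
    "angry": 5,
}

def engagement_score(reactions: dict) -> int:
    """Sum reaction counts scaled by their weights (unknown reactions count as 1)."""
    return sum(REACTION_WEIGHTS.get(name, 1) * count
               for name, count in reactions.items())

# A post with 100 likes scores 100, while a post with 100 angry reactions
# scores 500 -- so a divisive post outranks an equally popular benign one.
calm_post = engagement_score({"like": 100})       # 100
divisive_post = engagement_score({"angry": 100})  # 500
```

Under a scheme like this, any feed sorted by score will systematically promote whatever draws strong reactions, which is the amplification effect the documents describe.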
Although Facebook later rectified the flaw, the revelation offers hard evidence for researchers' long-standing hypothesis that while social media platforms claim they are neutral, they do, in fact, amplify and reward outrage.
In August, before Haugen began leaking the documents, Yale University published a study of 12.7 million tweets from 7,331 users that found that users who received more “likes” and “retweets” when they expressed outrage in a tweet were more likely to express further outrage in later tweets.
These observations must inevitably lead to some reforms down the road.
One interesting proposal worth considering is the introduction of nuance and aggression filters on platforms. Such filters would allow users to weed out posts from those who strip away complexity or are excessively confrontational.
Political analyst Mark Brolin, the author of Healing Broken Democracies: All You Need to Know About Populism, suggests such dials would "dampen the power of extremes" in spaces where people are encouraged to say outrageous things online.
Instead, if there were a disincentive – so that gratuitously provoking outrage meant reaching fewer people – online discourse could become healthier.
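The disincentive Brolin describes can be sketched in a few lines. This is a hypothetical illustration, not any platform's real policy: it assumes a platform could assign each post an outrage score between 0 and 1, and it scales down distribution as that score rises, so the most inflammatory posts reach the fewest people.

```python
# Hypothetical sketch of a reach-dampening disincentive -- not a real platform policy.
# Assumes the platform can score a post's outrage on a 0-to-1 scale.
def dampened_reach(base_reach: int, outrage_score: float) -> int:
    """Scale down a post's distribution as its outrage score rises.

    At outrage 0.0 the post keeps its full reach; at 1.0 it keeps only 20%.
    """
    if not 0.0 <= outrage_score <= 1.0:
        raise ValueError("outrage_score must be between 0 and 1")
    return round(base_reach * (1.0 - 0.8 * outrage_score))

# A calm post keeps its audience; a maximally outraged one loses 80% of it.
print(dampened_reach(1000, 0.0))  # 1000
print(dampened_reach(1000, 1.0))  # 200
```

The exact curve and the 80% figure are arbitrary choices for illustration; the point is simply that inverting the reward – penalising rather than boosting outrage – is mechanically straightforward once a platform decides to do it.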
Here in Asia, it would serve governments well to nudge internet companies to consider such reforms urgently, rather than allowing them to evolve at their own pace.