Hundreds of scientists and medical professionals recently petitioned Spotify demanding that it tackle Covid-19 misinformation. Photo: Reuters
Opinion
Edward Pinkney

Spotify row shows how moderation can play a role in stopping online abuse, Covid-19 misinformation

  • Scientists and health care workers are calling on platforms to do more to combat harassment and misinformation as the pandemic enters its third year
  • While there are genuine concerns, we must take care to preserve the discourse and not quash people’s passion and engagement

As the Year of the Tiger gets under way, public health professionals are starting to bare their claws over the state of online discourse. The science journal Nature recently published a comment piece expressing concern about social media abuse experienced by scientists during the pandemic, calling on platforms to do more.

This followed a letter from hundreds of scientists and medical professionals to the music and podcasting service Spotify demanding it tackle Covid-19 misinformation. Concerns over online harassment and misinformation are not new, but discontent seems to be reaching fever pitch.
In 2015, my postgraduate thesis at the University of Hong Kong examined two main approaches to moderating content in online mental health communities. The first approach was one principally of deleting content and banning users, labelled “negative moderation”, a reference to the philosopher Isaiah Berlin’s concept of negative liberty.

The second approach of “positive moderation” – not in the glib sense of happiness but of something that is additive – refers to one based on balancing out disagreeable content through reasoned criticism and contextualisation.

The thesis argued that, in communities trying to further knowledge and improve discourse, positive forms of moderation must take precedence over blocks and bans. Today, the idea of a clear distinction between negative and positive approaches looks simplistic, with social media platforms increasingly using warning labels and fact-checkers.


These can sometimes seem arbitrary and lacking in transparency, but they serve the purpose of moderating exchanges without entirely shutting them down. It would improve public trust if appointed fact-checkers were independent organisations rather than news outlets.
Still, some think it is overly utopian to suggest that a content moderation policy that doesn’t involve fingers hovering over the delete button can be viable. Certainly there are interactions that are beyond being reasoned with.

There is nothing civil about hurling personal insults at a doctor, not to mention wishing them dead. Premier League football referees will no doubt testify to how unpleasant targeted abuse can be for the best part of 90 minutes, let alone every time they look at their phone.

With that said, there are cases where positive moderation does apply to ongoing debates. For instance, what if those petitioning Spotify to tackle podcast misinformation had instead appealed to the podcast host to engage in a discussion about Covid-19 facts?

The Spotify page for “The Joe Rogan Experience” podcast. The head of Spotify Daniel Ek has condemned the podcaster’s repeated use of racial slurs but insisted that silencing him is not the answer. Photo: AFP

In fairness to the authors, they did call for Spotify to publish a moderation policy rather than simply to ban, and perhaps they also reached out to the podcast host. It is not clear from the letter.

It might seem naive to try to engage with purveyors of misinformation, but abandoning dialogue as the first port of call does not bode well for proponents of an open society. What is clear from much of the furore around online content is how poorly prepared our public health practitioners are for dealing with torrents of conflicting information and abuse.

If something they deemed misinformation was shared around the dinner table, they might engage in debate. If subjected to abuse on a street, they would cross the road. It would be a rare situation that security staff or police would need to be involved.

The online environment, however, remains something of a wild terrain for which professionals thrust into the limelight by the pandemic are ill-prepared. Academics used to intimate lecture rooms of 50 or 100 people are now grappling with hundreds of millions of Twitter users.


The two or three troublemakers and gadflies in a typical lecture hall who could be reasoned with or might have some interesting points have become a million chatterers. Is it any wonder then that, two years into the pandemic, academics are exhausted by what they are encountering online and are ready to say enough is enough?

For clinicians, the circumstances are no less tiresome. After months and even years of overloaded emergency departments and Covid-19 wards, witnessing grief and trauma daily, there is often little patience left for those unwilling to adhere to official guidance.
For some, this loss of empathy and heightened sensitivity to online content may be a symptom of compassion fatigue, a phenomenon more familiar in combat veterans.

Clinicians need better support and guidance. Just as public scientists might receive media training for how to engage with journalists, tools and skills for dealing with online discourse ought to also be provided, including how to maintain healthy boundaries and protect oneself from abuse.

To return to the football match analogy, there are circumstances when fans need to be removed or when players or managers deserve a red card. However, there is also a recognition that tempers will rise in heated environments.

A degree of tolerance is necessary to avoid suppressing healthy exchanges, and prudence is important for recognising how to de-escalate rather than inflame tensions. To delete content and ban users might create the appearance of a more sanitised and palatable environment.

Yet, it will also leave segments of disaffected or questioning populations behind and risks creating information silos where dominant theories go unchallenged. In the end, this will be self-defeating.

One of the symbols in global health is the snake – which adorns the World Health Organization logo – a reference to the Greek god of medicine, Asclepius. In the tradition of the Chinese zodiac, the snake is incompatible with the tiger. Let’s hope this is not an omen for the year ahead but, rather, a reminder for the public health field to look after itself and tread carefully.

Edward Pinkney is president of the University of Hong Kong Public Health Alumni Society
