Blunt laws can’t eradicate fake news at its root, but education and other tools can
- Legislation is open to abuse and can stifle innovation. More importantly, it cannot effectively fight the complex issue of misinformation, given the scale of the internet
- Collaborative efforts to set up fact-checking programmes and voluntary codes, invest in digital literacy training, and develop tech tools to flag fake news are a better solution
Misinformation is a complex social issue that predates the advent of the internet and social media. However, with the rising prevalence of sharing information online, misinformation poses a new set of challenges for all involved.
Fake news laws often grant governments broad powers and lack transparency about how decisions are made when the law is enforced. They are open to misuse or abuse – deployed to silence dissent and opposition – and they can stifle innovation.
Experts have argued that blunt legislation is not the most effective solution to the complex issue of misinformation, given the scale of the internet and the difficulty and subjectivity of discerning whether information is true or false. Instead, a long-term multistakeholder approach is needed.
This includes a commitment to digital, media and information literacy, including critical thinking skills, which is vital to dealing with online misinformation in a sustainable manner.
Digital education must be built into the school curriculum from the earliest years and become as commonplace in the classroom as maths or science. Stakeholders should also work together to develop targeted programmes for more vulnerable groups, such as the elderly and communities that are coming online for the first time.
Similarly, supporting the work of independent journalists and fact-checkers ensures that citizens have access to important context and counter-narratives, to enable them to make informed decisions about what to read and trust online. Funding and training for journalists should be part of a whole-of-society response.
Over the years, members of the Asia Internet Coalition, as well as other internet companies, have developed initiatives to combat misinformation, often in partnerships with civil society, journalists and others in the information ecosystem.
These collaborations include establishing and maintaining fact-checking programmes, conducting research into the issue, and investing in the development and roll-out of digital literacy training to people in the region.
Platforms have processes and product features in place to prevent the spread of misinformation, as well as wide-ranging policies that cover some of the most harmful content types. These policies are continuously updated to keep pace with changing behaviour online.
Internet companies have also invested in tools for users, organisations and governments to “flag” online content they believe is inappropriate, and they use a combination of reports from users, human oversight and artificial intelligence to identify and remove this content.
Some platforms have information notices that give users more context and a wider variety of authoritative sources to enable them to apply critical thinking to the information they see online.
Instead of legislation that could curtail freedom of expression and access to information, and potentially stifle innovation, governments should look towards initiatives and mechanisms, including self-regulation, that take into account the complexities of the issue and provide the flexibility needed to deal with a constantly evolving challenge like misinformation.
Australia offers one example: the signatories to a voluntary, self-regulatory code commit to safeguards that protect Australians against online misinformation and disinformation. The code is outcome-based, meaning signatories can implement measures appropriate to their respective platforms.
In May, the signatories published their inaugural transparency reports under the code, outlining how they protect Australians from misinformation online and providing statistics on the actions taken so far to combat misinformation on their platforms.
In other parts of Asia, we are seeing successful multi-stakeholder approaches with government, digital platforms, and non-profit groups working together to fight disinformation by raising awareness about misinformation, promoting digital literacy, and establishing fact-checking programmes.
Blunt legislation is unlikely to effectively address a highly complex and nuanced problem in the long term. Instead, “fake news” legislation is likely to harm freedom of expression and dampen public debate and the exchange of ideas, information and knowledge – a fundamental feature of successful digital economies.
To ensure that the internet remains a safe place for innovation, knowledge and business to thrive, industry, government and policymakers, the media and publishers, academia, and users themselves must all be part of the solution.
Promising alternative solutions have emerged. Pioneered by collaboratives that bring together stakeholders from across the information ecosystem, these solutions – ranging from technology tools that reduce the prevalence and impact of misinformation, to media and digital literacy efforts, fact-checking programmes and voluntary codes – must underpin the way forward in the battle against misinformation.
Jeff Paine is managing director of the Asia Internet Coalition