Free speech means different things to different people. No one anywhere in the world truly has the right to say whatever they like; there are always limits. But the decision by several social media platforms to close the accounts of United States President Donald Trump, after rioting by his supporters at the nation's seat of legislative power left five people dead, was not a matter of censorship. It was a business decision by private companies mindful of the need to prevent incitement of violence and, in the process, to protect their reputations and financial well-being.

As Trump's backers gathered in Washington to protest against the confirmation of his rival Joe Biden as president, he told them on social media that "if you don't fight like hell, you're not going to have a country any more". Later, he said he loved them and that they were "very special". As important as the leader of the world's most powerful nation may be, he does not have the right to incite violence. Facebook, Twitter and other platforms took the understandable step of silencing his hate speech. The following day, amid an outcry over his behaviour, he posted a video calling the attack on the Capitol building "heinous".

There is a broader issue of whether companies that have become so much a part of the everyday lives of billions of people should be so powerful. What were originally envisaged as websites to keep friends, family and others connected have become technological giants with ever-greater financial strength and influence. Their algorithms and censoring committees span the world, determining what can and cannot be posted, often in what seems an arbitrary manner.

The platforms long resisted efforts to curb their content but, in the face of growing public pressure, have been gradually relenting.
They are now acting against violent extremism, shutting down fake accounts and attaching alerts to information that is untrue or of a dubious nature. They have developed community standards and release reports on content moderation. Harmful speech online has to be moderated. But there is also a need for greater transparency and a clearer understanding of what is acceptable from place to place.