Are new parental controls on social media platforms Instagram, Snapchat, TikTok and WhatsApp really improving child safety?
- Social media platforms are under pressure over the perceived harm they cause teens, and some have recently added new safety features
- Critics say these measures put the burden on parents while doing little to screen for age-inappropriate content

As concerns about social media’s harmful effects on teens continue to rise, platforms from Snapchat to TikTok to Instagram are bolting on new features they say will make their services safer and more age-appropriate.
But the changes rarely address the elephant in the room – the algorithms pushing endless content that can drag anyone, not just teens, into harmful rabbit holes.
The tools do offer some help, such as blocking strangers from messaging children. But they also share some deeper flaws, starting with the fact that teenagers can get around limits if they lie about their age.
The platforms also place the burden of enforcement on parents. And they do little or nothing to screen for inappropriate and harmful material that can affect teens’ mental and physical well-being.

“These platforms know that their algorithms can sometimes be amplifying harmful content, and they’re not taking steps to stop that,” says Irene Ly, privacy counsel at the non-profit Common Sense Media.