Are more parental controls for social media sites Instagram, Snapchat, TikTok and WhatsApp really improving child safety?
- Social media platforms are under pressure for the perceived harm they cause teens, and some have recently added new safety features
- Critics say these measures put the burden on parents while doing little to screen for age-inappropriate content
As concerns about social media’s harmful effects on teens continue to rise, platforms from Snapchat to TikTok to Instagram are bolting on new features they say will make their services safer and more age appropriate.
But the changes rarely address the elephant in the room – the algorithms pushing endless content that can drag anyone, not just teens, into harmful rabbit holes.
The tools do offer some help, such as blocking strangers from messaging children. But they also share some deeper flaws, starting with the fact that teenagers can get around limits if they lie about their age.
The platforms also place the burden of enforcement on parents. And they do little or nothing to screen for inappropriate and harmful material that can affect teens’ mental and physical well-being.
“These platforms know that their algorithms can sometimes be amplifying harmful content, and they’re not taking steps to stop that,” says Irene Ly, privacy counsel at the non-profit Common Sense Media.
The more teens keep scrolling, the more engaged they get – and the more engaged they are, the more profitable they are to the platforms, she adds. “I don’t think they have too much incentive to be changing that.”
Take, for instance, Snapchat, which this week introduced new parental controls in what it calls the “Family Center” – a tool that lets parents see who their teens are messaging, though not the content of the messages. One catch: both parents and their children have to opt in to the service.
Nona Farahnik Yadegar, Snap’s director of platform policy and social impact, likens it to parents wanting to know who their kids are going out with.
If kids are headed out to a friend’s house or are meeting up, she said, parents will typically ask, “Hey, who are you going to meet up with? How do you know them?”
The new tool, she said, aims to give parents “the insight they really want to have in order to have these conversations with their teen while preserving teen privacy and autonomy”.
These conversations, experts agree, are important. In an ideal world, parents would regularly sit down with their kids and have honest talks about social media and the dangers and pitfalls of the online world.
But many kids use a bewildering variety of platforms, all of which are constantly evolving – and that stacks the odds against parents expected to master and monitor the controls on multiple platforms, said Josh Golin, executive director of children’s digital advocacy group Fairplay.
“Far better to require platforms to make their platforms safer by design and default instead of increasing the workload on already overburdened parents,” he says.
Farahnik Yadegar said Snapchat has “strong measures” to deter kids from falsely claiming to be over 13. Those caught lying about their age have their account immediately deleted, she said. Those who are over 13 but pretend to be even older get one chance to correct their age.
Detecting such lies isn’t foolproof, but the platforms have several ways to get to the truth. For instance, if a user’s friends are mostly in their early teens, it’s likely that the user is also a teenager, even if they said they were born in 1968 when they signed up.
Companies use artificial intelligence to look for age mismatches. A person’s interests might also reveal their real age. And, Farahnik Yadegar pointed out, parents might also find out their kids were fibbing about their birth date if they try to turn on parental controls but find their teens ineligible.
Child safety and teen mental health feature in both Democratic and Republican critiques of tech companies in the United States. The states, which have been far more aggressive than the federal government about regulating technology companies, are also turning their attention to the matter.
In March, several state attorneys general launched a nationwide investigation into TikTok and its possible harmful effects on young users’ mental health.
TikTok is the most popular social app among US teenagers, according to a new report from the Pew Research Center, which found that 67 per cent say they use the Chinese-owned video-sharing platform.
The company has said that it focuses on age-appropriate experiences, noting that some features, such as direct messaging, are not available to younger users.
It says features such as a screen-time management tool help young people and parents moderate how long children spend on the app and what they see. But critics note such controls are leaky at best.
“It’s really easy for kids to try to get past these features and just go off on their own,” says Ly.
Instagram, which is owned by Facebook parent Meta, is the second most popular app with teens, Pew found, with 62 per cent saying they use it, followed by Snapchat with 59 per cent.
Not surprisingly, only 32 per cent of teens reported ever having used Facebook, down from 71 per cent in 2014 and 2015, according to the report.
In autumn 2021, former Facebook employee turned whistle-blower Frances Haugen exposed internal research from the company concluding that the social network’s attention-seeking algorithms contributed to mental health and emotional problems among Instagram-using teens, especially girls.
That revelation led to some changes; Meta, for instance, scrapped plans for an Instagram version aimed at kids under 13. The company has also introduced new parental control and teen well-being features, such as nudging teens to take a break if they scroll for too long.
Such solutions, Ly says, are “sort of getting at the problem, but basically going around it and not getting to the root cause of it”.