Instagram and Facebook’s parent company Meta is adding some new parental supervision tools and privacy features to its platforms as social media companies face increasing scrutiny over their effects on teen mental health.
But many of the features require minors — and their parents — to opt in, raising questions about how effective the measures are. Instagram, for instance, will now send a notice to teens after they have blocked someone, encouraging them to let their parents “supervise” their account. The idea is to grab kids’ attention when they might be more open to parental guidance.
If a teen opts in, the system lets parents set time limits, see who their kid follows or is followed by, and track how much time the minor spends on Instagram. It does not let parents see message content.
Instagram launched parental supervision tools last year to help families navigate the platform and find resources and guidance. A sticking point in the process is that kids need to sign up if they want parents to supervise their accounts. It’s not clear how many teen users have opted in, and Meta has not disclosed any numbers.
Such supervision allows parents to see how many friends their child has in common with accounts the child follows or is followed by. So if the child is followed by someone none of their friends follow, it could raise a red flag that the teen does not know the person in real life.
This, Meta says, “will help parents understand how well their teen knows these accounts, and help prompt offline conversations about those connections.”
Meta is also adding parental supervision tools already available on Instagram and on its virtual reality products to Messenger. The opt-in feature lets parents see how much time their child spends on the messaging service and information such as their contact lists and privacy settings — but not who they are chatting with, for instance.
Such features can be useful for families in which parents are already involved in their child’s online life and activities. Experts say that’s not the reality for many people.
Last month, U.S. Surgeon General Vivek Murthy warned that there is not enough evidence to show that social media is safe for children and teens and called on tech companies to take “immediate action to protect kids now.”
Murthy told The Associated Press that while he recognizes social media companies have taken some steps to make their platforms safer, those actions are not enough. For instance, while kids under 13 are technically banned from social media, many younger children access Instagram, TikTok and other apps by lying about their age, either with or without their parents’ permission.
Murthy also said it’s unfair to expect parents to manage what their children do with rapidly evolving technology that “fundamentally changes how their kids think about themselves, how they build friendships, how they experience the world — and technology, by the way, that prior generations never had to manage.”
“We’re putting all of that on the shoulders of parents, which is just simply not fair,” Murthy said.
Also beginning Tuesday, Meta will encourage — but not force — children to take a break from Facebook, just as it already does on Instagram. After 20 minutes, teenage users will get a notice to take time away from the app. If they want to keep scrolling, they can just close the notification. TikTok also recently introduced a 60-minute time limit for users under 18, but they can bypass it by entering a passcode, set either by the teens themselves or, if the child is under 13, by their parent.
“What we are focused on is kind of a suite of tools to support parents and teens on how they can best engage in safe and appropriate experiences online,” said Diana Williams, who oversees product changes for youth and families at Meta. “We’re also trying to build tools that teens can use themselves to learn how to manage and recognize how they’re spending their time. So things like ‘take a break’ and ‘quiet mode’ in the evenings.”