Instagram’s New Safety Features: What You Need to Know
Instagram recently announced a series of updates to improve teenage users' safety and privacy. These changes are part of Meta's ongoing efforts to address concerns about the well-being of younger users on its platform.
The biggest shift? Teen accounts will now be private by default. These accounts, referred to as "Teen Accounts," also come with stricter messaging restrictions and filters for offensive content. For users under 16, parental approval is now required to loosen these settings, giving parents more control over their child's social media experience.
Naomi Gleit, Meta’s head of product, noted that the updates aim to address parents' three core concerns: unwanted contact, inappropriate interactions, and managing screen time. Instagram will also implement a refreshed "take a break" feature to remind teens to leave the app after extended use.
These updates respond to growing criticism about Instagram’s impact on youth mental health and safety. The platform has been under scrutiny for allegedly contributing to issues such as the sexualization of children and fueling mental health challenges among young users. As a result, several states have sued Meta, accusing it of creating "dopamine-driven" features that foster addictive behavior among teens. Despite Meta’s efforts, Instagram remains at the center of a broader conversation about online safety for minors.
Meta’s new features, including expanded parental controls, seem designed to preempt stricter regulation. Parents can now see who their teens have messaged recently (though not the messages themselves) and which topics they are exploring on the platform. While this helps parents stay informed, it raises concerns about potential misuse, particularly in households where teens may need privacy to explore personal beliefs or identities.
Meta has also introduced AI-driven systems to verify the age of users and detect those who may lie about their age when signing up. Additionally, Instagram has partnered with Yoti, a company that analyzes facial data to estimate a user’s age, providing an extra layer of protection for younger users.
Despite these advancements, Instagram’s parental controls remain optional and require parents and teens to opt in. This has drawn criticism from child safety advocates who believe the responsibility should lie more with the platform than with parents.
As Meta continues introducing new safety measures, balancing parental oversight with teen autonomy remains challenging. With social media playing a critical role in young people's lives, the debate over how to protect them while respecting their freedom is far from over.
Automation Consultant & Creator
Great article! Meta seems to have dragged its feet on this. I have seen a lot of criticism of facial-age-recognition vendors for the unreliable technology behind their products. Ultimately, I believe this move by Meta is a band-aid on a much larger problem. If they want Instagram to be a teen-friendly place, there's a lot of content on which they need to do a better job of enforcing Community Guidelines, because teens can access quite a lot of borderline explicit material (that may be one bio link away from the real deal). Also, teens and the elderly are probably the most vulnerable to social engineering by the less-than-well-intentioned, and Meta needs to implement changes that make the entire platform safer for everyone.