Meta is introducing new parental controls on Instagram to address concerns about AI safety and teens' interactions with AI chatbots. Starting in early 2026, parents will be able to disable one-on-one chats between teens and AI characters, block specific AI characters, and receive insights into the topics their teens discuss with AI chatbots. The move comes amid growing concern about AI's impact on teen mental health and safety, including lawsuits alleging that AI companies played a role in teen suicides.
The new controls give parents more visibility into, and control over, their teens' AI interactions while preserving age-appropriate learning opportunities. Meta's AI assistant will remain available to teens with default age-appropriate protections in place, even if parents disable chats with AI characters. In addition, teen accounts on Instagram will be restricted to PG-13 content by default, and teens won't be able to change these settings without parental permission.
The move is part of a broader effort by Meta to address criticism and regulatory scrutiny over AI safety and teen interactions. The company has faced criticism for allowing provocative conversations between AI chatbots and minors, and has since strengthened its safeguards to prevent AI characters from engaging in inappropriate discussions with teens. The new controls will launch in English in the US, UK, Canada, and Australia, and Meta plans to expand them to its other platforms in the future.
While advocacy groups welcome Meta's efforts, they emphasize that more needs to be done to protect teens. James Steyer, founder and CEO of Common Sense Media, said, "Meta's new parental controls on Instagram are an insufficient, reactive concession that wouldn't be necessary if Meta had been proactive about protecting kids in the first place." As AI technology continues to evolve, Meta's move sets a precedent for other companies to strengthen their own safeguards and prioritize teen safety.