Meta is rolling out new parental controls to improve teen safety on its platforms, allowing parents to disable their teens' private chats with AI characters. The move comes amid growing concern over how AI interactions affect minors, particularly after criticism that Meta's AI chatbots had engaged in flirtatious conversations with teens. The new tools, detailed by Instagram head Adam Mosseri and Chief AI Officer Alexandr Wang, will debut on Instagram early next year in the US, UK, Canada, and Australia. Parents will also be able to block specific AI characters and view the broad topics their teens discuss with chatbots and Meta's AI assistant, without turning off AI access entirely.
The AI assistant will remain available with age-appropriate defaults even if parents disable one-on-one chats with AI characters. Meta's AI characters are designed not to engage teens in age-inappropriate discussions of self-harm, suicide, or disordered eating. The company also uses AI signals to place accounts it suspects belong to teens under teen protections, even if those users claim to be adults. This update follows reports that many of the safety features Meta has implemented on Instagram over the years either do not work well or do not exist at all, underscoring the need for more robust safeguards.
Meta's move is part of a broader effort to strengthen protections within its platforms, including expanded protections in direct messages, enhanced nudity filters, and new measures for adult-run accounts featuring children. The company has removed 135,000 Instagram accounts for sexualizing child-focused content and is sharing data with other platforms to identify and remove exploitative content. These changes come as regulatory scrutiny intensifies, with Australia preparing to enforce tougher age restrictions for social media platforms.
By giving parents more control over AI interactions, Meta aims to balance innovation with responsibility and rebuild trust. The company maintains that AI tools can be valuable for learning and creativity while acknowledging the need for strong safeguards. As AI becomes increasingly integrated into daily life, Meta's move reflects a growing industry trend toward responsible AI development and deployment, prioritizing ethical considerations alongside technological advancement.