Meta has announced stricter AI chatbot safety rules, blocking sensitive conversations with teenagers about suicide, self-harm, and eating disorders. Instead, teens will now be redirected to trusted helplines and expert resources. This move comes after growing public pressure and a wave of concern over how AI might be influencing vulnerable users.
The changes will affect teenage users across Meta's platforms, including Facebook and Instagram. Teens will be able to interact only with AI chatbots designed for educational or skill-development purposes. The company described the measures as temporary and said they will roll out over the coming weeks in English-speaking countries.
The decision follows a Reuters investigation that revealed Meta's internal guidelines allowed chatbots to engage in emotionally charged or romantic exchanges with minors. One example showed a bot telling an eight-year-old, "Every inch of you is a masterpiece—a treasure I cherish deeply." The backlash was swift, with US Senator Josh Hawley launching a formal probe and child safety advocates calling for stricter oversight.
Meta responded by revising its policies and removing access to certain AI characters for teen users. A company spokesperson said protections had been built into these AI tools from the start, and that additional restrictions were being introduced as a precaution.
The debate around AI safety isn't limited to Meta. Other companies, such as Elon Musk's xAI, have taken different approaches to the risks of AI interacting with teens. Meta's response has been reactive and public-facing, driven by scrutiny and a desire to rebuild trust, while xAI has moved more quietly, framing its changes as a feature rollout rather than a safety correction.