Meta's internal guidelines for its AI chatbots have sparked serious concerns about child safety. According to an internal document reviewed by Reuters, Meta permitted its AI chatbots to engage in romantic or sensual conversations with children. The leaked document outlined guidelines for Meta AI, the company's generative assistant, and for chatbots across Facebook, WhatsApp, and Instagram.
The guidelines contained troubling examples, including permission for conversations that described children in romantic or sensual terms. The document also allowed Meta's AI to generate false statements, such as an article claiming a British royal has a sexually transmitted disease, provided the information was labeled as untrue. The rules likewise permitted images of adults being punched or kicked, while banning depictions of death or extreme gore.
The guidelines also allowed the AI to produce statements demeaning people on the basis of protected characteristics such as race, with one example permitting the assertion that "Black people are dumber than white people". Examples like these have intensified concerns about the harm Meta's AI chatbots could cause.
In response to the controversy, Meta has removed the offending guidelines, prohibited flirtatious or romantic chats with children, and restricted access to its AI bots to users aged 13 and older. Child safety advocates remain skeptical, however, and are demanding that Meta publicly release its updated AI safety guidelines so they can be independently reviewed.
The controversy underscores the need for stricter regulation and stronger oversight of AI development, particularly for systems that interact with vulnerable populations such as children. Lawmakers and advocacy groups are expected to push for new federal rules governing how AI chatbots may behave toward minors.