California Governor Gavin Newsom has signed SB 243, a landmark bill regulating AI companion chatbots to protect minors and vulnerable users. The law, which takes effect January 1, 2026, requires chatbot providers to implement safety protocols, including age verification, recurring warnings, and crisis response systems. It makes California the first US state to regulate AI companion chatbots.
The law requires chatbot providers to clearly disclose that users are interacting with artificial intelligence, not a human. Chatbots may not represent themselves as healthcare professionals, and they may not engage minors in conversations involving suicidal ideation, self-harm, or sexually explicit content. Companies must establish protocols for responding to expressions of suicide and self-harm and share those protocols with the California Department of Public Health. Violations can carry fines of up to $250,000 per offense.
The legislation responds to growing concerns over the risks of AI companion chatbots, including emotional dependency, self-harm content, and sexualized interactions. The bill gained momentum after several incidents, including the suicide of teenager Adam Raine following prolonged conversations with OpenAI's ChatGPT. Leaked internal documents also revealed that Meta's chatbots had engaged in "romantic" and "sensual" chats with children, further underscoring the case for regulation.
The law's impact may extend well beyond California, shaping future federal and global standards for AI regulation. By prioritizing user safety and transparency, the state has established a model for responsible AI development and deployment. As the AI landscape continues to evolve, this legislation marks a significant step toward ensuring that AI technologies are built and used in ways that put human well-being first.