California has taken a significant step towards regulating AI companion chatbots with the passage of Senate Bill 243 (SB 243), which aims to protect minors and vulnerable users from potential harm. The bill, introduced by state senators Steve Padilla and Josh Becker, has garnered bipartisan support and now awaits Governor Gavin Newsom's signature by October 12.
The bill's provisions are designed to mitigate the risks AI chatbots pose, particularly to minors. For instance, operators of AI companion chatbots will be required to implement safeguards that prevent conversations involving suicidal ideation, self-harm, or sexually explicit content. Additionally, minors will receive recurring alerts every three hours reminding them that they are interacting with an AI and encouraging them to take breaks.
Companies will also be required to submit annual transparency reports starting July 1, 2027, detailing the mental health risks associated with their chatbots. Furthermore, individuals will have the right to file lawsuits against companies for violations, seeking damages of up to $1,000 per incident, injunctive relief, and attorney's fees.
The bill's passage follows growing concerns over the impact of AI chatbots on children's mental health. The Federal Trade Commission (FTC) has also launched an investigation into the safety practices of major AI companies, including Google, Meta, and OpenAI.
If signed into law, SB 243 would make California the first state to comprehensively regulate AI companion chatbots, setting a precedent for the rest of the country. The law would take effect on January 1, 2026, giving companies until then to implement the required safety protocols.