The US Federal Trade Commission (FTC) is launching an investigation into the potential risks posed by AI-powered chatbots, particularly to children's mental health and privacy. The inquiry will focus on how these chatbots store and share user data, and whether existing protections are sufficient to prevent harm or exploitative behavior.
The FTC will demand internal documents from major tech firms, including OpenAI, Meta Platforms, and Alphabet Inc.'s Google, among others. The investigation will examine potential privacy harms, mental health risks, and data storage and sharing practices.
The inquiry follows growing concerns and complaints about AI chatbots, including allegations of the unlicensed practice of medicine, deceptive trade practices, and privacy violations. Some AI platforms have been accused of enabling "therapy bots" to operate without proper medical supervision, marketing AI-generated mental health services to children in misleading ways, and compromising users' personal data.
The FTC's investigation aims to assess these potential harms and determine whether further regulatory action is needed to protect children and promote responsible AI development. By examining the practices of major tech firms, the agency seeks to gauge the risks posed by AI-powered chatbots and whether companies are taking adequate steps to mitigate them.