Policymakers and Lawmakers Eyeing New Regulations to Restrict Monetization of AI Mental Health Chats

A Forbes analysis highlights growing concern among policymakers and lawmakers about how artificial intelligence (AI) systems that offer mental health advice are being monetized — often without clear legal boundaries. Millions of people use generative AI and large language models (LLMs) to seek guidance on emotional and psychological issues, but many companies are collecting and potentially selling insights derived from those private conversations to third parties looking to target users with personalized products and services. Critics argue this trade of sensitive inference data raises ethical and privacy questions that current laws don’t adequately address.

The article explains that although many users freely share detailed personal information with AI systems, no comprehensive federal protections govern how that data can be used commercially. AI makers’ licensing agreements typically allow them to reuse user prompts and content for training and development, meaning inferences about a user’s mental state could be sold to advertisers, career coaches, or other commercial interests without users realizing it. This practice has prompted some lawmakers to consider new regulations limiting how mental health–related AI interactions can be monetized.

At the state level, several U.S. states — including Illinois, Nevada, and Utah — have already enacted laws that restrict or regulate how AI systems can present themselves or be used in mental health care settings. These laws often focus on ensuring AI doesn’t claim to provide professional therapeutic services, or they require clear disclosures that a chatbot is not human. Lawmakers contemplating further regulation want to address not just safety and accuracy, but also how sensitive conversational data is harvested and commercialized.

Supporters of tighter rules argue that allowing companies to profit from intimate mental health chats without robust consent or oversight could lead to exploitation and privacy harms, especially as AI use in emotional and psychological support grows. Crafting such regulation is challenging, however: lawmakers must balance consumer protection against preserving beneficial uses of AI in providing accessible guidance, aiming to ensure safeguards and transparency without suppressing innovation or the positive potential of AI tools.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
