Artificial intelligence use among children and teenagers has grown rapidly, raising concerns among parents, educators, and policymakers about safety, privacy, and long-term developmental effects. Many teens now regularly interact with generative AI tools for learning, entertainment, and social engagement. Yet most parents feel that schools and institutions have not done enough to explain how these tools work or how young users can engage with them safely, creating a widening gap between adoption and oversight.
In response, several U.S. states have begun introducing or passing laws aimed at protecting minors from potential AI-related harms. These efforts focus on issues such as data privacy, exposure to harmful content, and transparency around how AI systems interact with young users. A growing number of states are considering AI-specific child safety measures, but progress varies significantly, resulting in a fragmented regulatory landscape across the country.
Some states have moved quickly, proposing or enacting laws, while others have struggled to advance legislation due to political, legal, or technical challenges. This uneven progress means that protections for children and teens can differ widely depending on where they live. Because AI technologies evolve faster than legislation, critics argue that state-level action, while important, may not be sufficient on its own to match the scale and speed of the issue.
At the national level, debate continues over how federal AI policy might interact with state laws, particularly whether new federal frameworks could preempt states’ ability to enact child-focused protections. Child safety advocates stress that timely regulation is essential, warning that delays could leave young users vulnerable to emerging risks. The broader consensus is that safeguarding children in an AI-driven world will require coordinated action from lawmakers, educators, parents, and technology companies alike.