A new report urges state and local government leaders to take a more active role in shaping how artificial intelligence is regulated and used in public services. As AI adoption accelerates, the report argues that leaders cannot afford to be passive, warning that unchecked deployment could create risks around privacy, security, and public trust. At the same time, it acknowledges that many governments are already experimenting with AI to improve efficiency and service delivery.
The report highlights growing tension between state-level efforts to regulate AI and broader national policies that could limit local authority. It emphasizes that policymakers at all levels have a narrow window to establish guardrails that ensure AI systems are transparent, accountable, and aligned with public values. Without clear rules, the technology could outpace the ability of institutions to manage its societal impact.
The report also points to AI’s potential benefits for government operations, including faster service delivery, better forecasting of community needs, and streamlined administrative processes. It encourages leaders to adopt AI thoughtfully, starting with simple, well-defined use cases and collaborating with technology experts, civic organizations, and internal innovation teams to ensure systems are designed with residents in mind.
However, the report warns that poorly governed AI could amplify misinformation, threaten election integrity, and erode data privacy. To avoid these outcomes, it calls for stronger oversight, workforce training, and policies that keep humans responsible for critical decisions. Ultimately, the message is clear: leaders have an urgent responsibility to ensure AI strengthens democratic institutions rather than undermines them.