As state officials increasingly adopt artificial intelligence (AI) to improve government services, they are being warned to proceed with caution. Chris Cantey, information systems manager at the Minnesota Legislative Coordinating Commission, likens the approach to wearing "safety goggles" to ensure responsible use and mitigate potential risks.
Cantey emphasizes the importance of establishing guardrails and policies for AI adoption in government. That includes deciding how to track and record the prompts submitted to AI tools so agencies can maintain transparency and accountability, a practice illustrated in the sketch below. Governments also need to settle who owns AI-generated content and protect the data and privacy of any information fed into AI services.
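To make the prompt-tracking point concrete, here is a minimal sketch of what an agency-side audit log might look like. It is an illustration only, not a method described by Cantey or any state: the field names, the `record_prompt` function, and the choice of a local JSONL file are all assumptions made for the example.

```python
# Illustrative sketch only: a hypothetical audit log for AI prompts,
# recording who submitted which prompt to which tool and when.
# Field names and storage choice (a local JSONL file) are assumptions,
# not drawn from any specific state policy.
import json
import hashlib
from datetime import datetime, timezone

AUDIT_LOG_PATH = "ai_prompt_audit.jsonl"  # hypothetical log location


def record_prompt(user_id: str, tool_name: str, prompt: str, response: str) -> None:
    """Append one prompt/response exchange to an append-only audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool_name,
        "prompt": prompt,
        # Storing a hash of the response keeps the log compact while still
        # letting auditors verify that a retained copy has not been altered.
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(entry) + "\n")


# Example usage:
# record_prompt("analyst-042", "drafting-assistant",
#               "Summarize public comments on the transit plan.",
#               "The comments raise three recurring concerns...")
```

An append-only log of this kind is one common way to support later review of how AI tools were used, without dictating which tools an agency adopts.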
Education and training on AI and its risks are essential if officials are to use the technology responsibly, and clear policies can help ensure it is deployed in ways that benefit residents rather than expose them to harm. Developing those policies effectively requires collaboration among technologists, policymakers, and other stakeholders.
Minnesota legislators have taken steps to address AI policy, particularly concerning deepfakes, and have enacted laws regulating the use of facial recognition technology. Other states, including Wisconsin, South Dakota, and North Dakota, have also introduced legislation to regulate AI use, underscoring a broader push for clear guidelines and rules.
As AI technology continues to evolve, state officials must prioritize responsible AI adoption to ensure that its benefits are realized while minimizing its risks. By taking a cautious and informed approach, governments can harness the potential of AI to improve services and decision-making while protecting citizens' rights and interests.