New York State Comptroller Thomas DiNapoli has raised concerns about the state's lack of robust artificial intelligence (AI) regulations. A recent audit found that New York's centralized guidance and oversight of AI use are inadequate, creating a risk that agencies will deploy the technology irresponsibly.
The state's AI policy lacks detailed guidance, leaving agencies to interpret responsible AI use on their own. This has produced inconsistent and potentially problematic AI deployments across state agencies. For instance, the Department of Motor Vehicles (DMV) exempted its facial recognition software from AI oversight, contradicting a determination by the Office of Information Technology Services (ITS) that such software falls under the policy.
Furthermore, staff training on AI risks and biases is insufficient, which can lead to noncompliance and unintended consequences. The Office for the Aging (NYSOFA) deploys AI-powered, voice-activated devices to combat loneliness without safeguards for human oversight or data security. Similarly, the Department of Corrections and Community Supervision (DOCCS) uses AI to monitor inmate phone calls without addressing the technology's potential risks or biases.
The audit also found that ITS does not maintain a comprehensive inventory of the AI systems in use across state agencies. This gap in oversight and accountability raises concerns about potential misuse of AI technologies.
DiNapoli's audit recommends strengthening the AI policy, providing staff training, and implementing AI governance structures. The state's first chief AI officer, Shreya Amin, is working to establish a robust AI governance framework that emphasizes education, training, and collaboration among agencies. As AI technologies continue to evolve, New York State will need effective guardrails to ensure responsible AI use and mitigate potential risks.