Santa Clara officials have sparked a debate over the accuracy of AI-generated information after using ChatGPT in two public meetings. Councilmember Kelly Cox used ChatGPT to review legal contracts tied to the 2026 FIFA World Cup matches, while Mayor Lisa Gillmor used it to research potential competitors to merchandiser Fanatics.
The use of AI tools like ChatGPT in government meetings has raised concerns about reliability and accountability. Councilmember Kevin Park, who has worked at AI companies for a decade, warns that AI can provide false information if it is not properly trained or properly understood. He emphasizes that AI generates responses based on statistical likelihood, not fact-checking.
Councilmember Suds Jain compares using ChatGPT to searching on Google and views it as a productivity tool. Experts such as Mindy Romero, director of the University of Southern California's Center for Inclusive Democracy, say officials should disclose when they use AI for research and should scrutinize AI tools more closely.
The city of Santa Clara is part of the Government AI Coalition, which aims to develop guidelines for AI use. Santa Clara also has its own AI best practices working group and expects to finalize a policy later this year. The concern extends beyond local government: Justice B.R. Gavai of India's Supreme Court has cautioned against relying solely on AI for legal research, citing the risk of fake case citations and fabricated legal facts.
As AI becomes increasingly integrated into government decision-making, ensuring the accuracy and reliability of AI-generated information is crucial. The debate highlights the need for transparency, accountability, and responsible AI use in governance.