As artificial intelligence spreads rapidly across sectors such as healthcare, education, finance, and labour markets, a pressing question arises: who should govern AI systems? Technology regulation has traditionally been the province of governments and regulatory bodies, but the pace at which AI is developed and deployed has outstripped the capacity of conventional regulatory frameworks to monitor and control its effects.
A key concern is that while private companies and technical experts build AI systems, the risks and consequences are often borne by society as a whole. Unlike most regulated products, AI systems can continue to change after deployment, through retraining, model updates, and shifts in the data they encounter, which makes them hard to govern with static laws and traditional oversight methods. The result is an imbalance: knowledge and power stay concentrated among developers, while the public absorbs the social, economic, and ethical effects of these technologies.
To address this gap, the article suggests adopting participatory governance in AI regulation. This approach would involve citizens, civil society groups, researchers, and academic institutions in monitoring and evaluating AI systems. Such participation could surface issues that developers overlook, including cultural biases, regional differences, and social harms that only emerge when AI systems are used in real-world contexts. Community-led audits, one of which is sketched below, together with broader stakeholder involvement, could strengthen transparency and accountability in AI systems.
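To make the idea of a community-led audit concrete, the following is a minimal sketch of one check an outside auditor might run on a deployed system's decision logs: a demographic-parity comparison of favourable-outcome rates across groups. The function, data, and group labels here are hypothetical illustrations, under the assumption that auditors have access to anonymised (group, decision) records; it is not a prescribed audit methodology.

```python
from collections import defaultdict

def demographic_parity_gaps(records):
    """Compare favourable-outcome rates across groups.

    `records` is an iterable of (group, decision) pairs, where
    decision is 1 for a favourable outcome (e.g. loan approved)
    and 0 otherwise. Returns per-group rates and the gap between
    the best- and worst-treated groups.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical audit sample: (demographic group, model's decision)
audit_sample = [("A", 1), ("A", 1), ("A", 0),
                ("B", 1), ("B", 0), ("B", 0)]
rates, gap = demographic_parity_gaps(audit_sample)
print(rates)                     # roughly {'A': 0.67, 'B': 0.33}
print(f"parity gap: {gap:.2f}")  # 0.33; a large gap flags the system for closer review
```

A check like this cannot prove a system is fair, but it gives community auditors a transparent, reproducible signal to bring into the regulatory conversation, which is exactly the kind of external scrutiny the article argues for.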
Ultimately, effective AI governance requires collaboration between governments, private companies, and society. If AI regulation remains confined to closed technical or bureaucratic processes, it risks deepening inequalities and weakening democratic oversight. By building institutions that allow public participation, improving transparency, and expanding AI literacy, governments can ensure that AI systems align with societal values rather than narrow institutional or commercial interests.