The Indian government is working on a voluntary code of conduct for companies using artificial intelligence (AI). Although specific details are scarce, this move reflects the government's efforts to ensure responsible AI development and deployment.
A recent report underscores the case for proactive government involvement: 86% of respondents said India needs government intervention in AI for national security. The concern is sharpened by the scale of India's cybersecurity challenges, with more than 2.3 million incidents reported in 2024 and financial losses of ₹1,200 crore.
Public-private partnerships are widely seen as a way to bridge the AI divide and ensure the technology is integrated ethically, with many experts arguing that collaboration between government and the private sector is essential to addressing the challenges of AI development and deployment.
Globally, approaches to AI regulation vary. The European Union has implemented the AI Act, a comprehensive framework that categorizes AI systems by risk level and imposes corresponding accountability obligations, aiming to mitigate harm while promoting innovation. By contrast, the United Arab Emirates has become the first country to use AI to write, review, and amend laws, a notable milestone in AI-assisted governance.
As India drafts its voluntary code of conduct, the key question is how it will balance innovation with safety and accountability. The code's development is a crucial step toward ensuring AI is used responsibly and for the benefit of society.