India’s Digital Personal Data Protection Act, 2023 (DPDPA) is becoming increasingly important in the age of artificial intelligence, even though the law does not directly mention AI. The legislation focuses on how organizations collect, process, store, and reuse personal data — an issue that lies at the center of modern AI systems. Since AI models rely heavily on massive amounts of data for training, profiling, personalization, and automation, experts say the DPDPA will significantly shape how AI technologies are developed and deployed in India.
One of the biggest concerns involves “purpose limitation,” a principle under the DPDPA that requires organizations to clearly define why personal data is being collected and how it will be used. AI systems often reuse the same data for multiple functions such as identity verification, recommendation engines, fraud detection, advertising, and predictive analytics. Legal experts argue that this creates risks of over-collection and misuse if companies fail to obtain proper consent or transparently explain how AI systems process personal information.
The rise of generative AI has also introduced new privacy and security challenges. AI systems can now create synthetic voices, deepfakes, cloned identities, and fabricated content that may cause reputational, financial, or emotional harm. Prominent examples include facial-recognition systems, AI-powered profiling tools, and generative chatbots, all of which collect large amounts of sensitive personal data. Experts warn that AI governance must go beyond traditional data collection rules to also address algorithmic inference, impersonation, misinformation, and AI-generated harms at scale.
India is currently moving toward a broader AI governance framework that combines the DPDPA with emerging guidelines for responsible AI development. Policymakers are emphasizing principles such as transparency, accountability, fairness, human-centric design, and safety. Rather than introducing a single comprehensive AI law like the European Union's AI Act, India appears to favor a flexible framework in which existing sector regulators oversee AI use within their own industries. Experts believe the success of this approach will depend on balancing innovation with strong protections for privacy, consent, and digital trust.