The Indian government, through the Ministry of Electronics and Information Technology (MeitY), has proposed a new rule requiring that all AI-generated content carry a continuous, clearly visible label throughout its entire duration. This means whether it’s a video, image, audio, or text, users must be constantly informed that what they are viewing has been created using artificial intelligence—not just at the beginning or in a caption.
This marks a major tightening of earlier rules, which only required “prominent visibility” of such labels. Under the new proposal, temporary disclosures—like a watermark shown for a few seconds—will no longer be enough. Instead, the label must remain visible at all times, ensuring there is no ambiguity for viewers during the entire playback or display.
The primary goal behind this move is to combat misinformation, deepfakes, and the growing trust deficit in online content. With AI tools making it easier to generate highly realistic fake content at scale, the government aims to ensure transparency at the point of consumption, so users can immediately identify whether content is synthetic or real.
However, the proposal also raises practical challenges. Platforms will need advanced systems to detect AI-generated content and apply labels consistently at scale, which could increase compliance costs and technical complexity. There are also questions about effectiveness—while labels may inform users, they may not always prevent the spread or impact of misleading content.
Overall, the proposal signals a shift toward stricter AI regulation in India, where transparency becomes mandatory rather than optional—potentially setting a global precedent for how synthetic content is handled in the digital ecosystem.