India’s government has issued new directives requiring social media platforms and other intermediaries to identify and regulate artificial intelligence‑generated content more transparently and responsibly. Under the updated Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, which take effect on February 20, 2026, platforms must implement systems to detect and manage AI‑created or altered content and deploy automated tools to block unlawful, misleading, or sexually exploitative material.
A key change is the formal introduction of a definition for “synthetically generated information.” The rules clarify that this includes audio, visual, or audiovisual content created or modified using computer tools in a way that makes it appear real or authentic. Routine editing or enhancement that does not materially alter meaning or context is not treated as synthetic content under this definition.
The directive mandates that lawful AI‑generated material be clearly labelled and carry embedded identifiers indicating its synthetic origin. Platforms are instructed to ensure such content displays visible labels and metadata markers, including unique identifiers, so that users can tell when the material they are viewing online is synthetic.
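The rules do not prescribe a specific technical format for these labels or identifiers. As a rough illustration only, the Python sketch below shows one way a platform might attach a machine‑readable synthetic‑origin marker to an image using the Pillow library’s PNG metadata support; the field names and label text are hypothetical assumptions, not requirements taken from the rules.

```python
# Illustrative only: one possible way to embed a synthetic-origin marker
# in a PNG file's metadata. Field names are hypothetical, not drawn from
# the 2026 amendment rules.
import json
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_synthetic_image(src_path: str, dst_path: str) -> str:
    """Copy an image, embedding a label hint and a unique identifier."""
    marker_id = str(uuid.uuid4())  # hypothetical "unique identifier"
    marker = {
        "synthetic": True,                # content was AI-generated or altered
        "label": "AI-generated content",  # text a platform might surface to users
        "identifier": marker_id,          # unique ID for traceability
    }

    image = Image.open(src_path)
    metadata = PngInfo()
    # Store the marker as a PNG text chunk so downstream tools can read it.
    metadata.add_text("synthetic_content_marker", json.dumps(marker))
    image.save(dst_path, pnginfo=metadata)
    return marker_id
```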
Alongside the labelling requirements, intermediaries must deploy “reasonable and appropriate technical measures”, such as automated tools or other mechanisms, to prevent unlawful synthetic content from being created or shared on their networks. These steps reflect India’s broader effort to tackle misinformation, strengthen user protections, and hold tech platforms accountable under evolving digital safety norms.
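The rules leave the choice of technical measures to the intermediaries themselves. Purely as a hypothetical sketch, an upload gate on a platform might combine a declared synthetic‑content flag with an automated screening step; the classify_content function below is a placeholder for whatever detection model a platform actually deploys, not a real API.

```python
# Hypothetical upload gate combining a synthetic-content declaration with
# automated screening. classify_content() is a stand-in for a real model.
from dataclasses import dataclass


@dataclass
class UploadDecision:
    allowed: bool
    reason: str


def classify_content(data: bytes) -> dict:
    """Placeholder for an automated detection model; returns risk scores in [0, 1]."""
    return {"unlawful": 0.0, "misleading": 0.0, "sexually_exploitative": 0.0}


def review_upload(data: bytes, declared_synthetic: bool) -> UploadDecision:
    scores = classify_content(data)
    # Block the categories the rules single out, regardless of labelling.
    if max(scores.values()) > 0.9:
        return UploadDecision(False, "blocked by automated screening")
    # Lawful synthetic content is published only with a visible label attached.
    if declared_synthetic:
        return UploadDecision(True, "published with synthetic-content label")
    return UploadDecision(True, "published")
```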