India's recent proposal to mandate the labelling of AI-generated content has sparked debate among experts about its feasibility and effectiveness. The draft amendments to the IT Rules, 2021 would require social media platforms to obtain user declarations on whether uploads are synthetically generated, deploy automated verification tools, and visibly label such content before publication. Labels must cover at least 10% of the visual frame, or the first 10% of the audio duration, so that viewers know when they are watching or hearing AI-created material.
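To make the 10% requirement concrete, the sketch below shows one way a platform could stamp a compliant banner onto a video frame. The choice of library (Pillow), the banner's position, and the label text are illustrative assumptions, not prescriptions from the draft rules.

```python
# A minimal sketch of the 10%-of-frame rule, assuming a platform stamps a
# visible banner onto each frame. Placement and wording are hypothetical.
from PIL import Image, ImageDraw

def stamp_ai_label(frame: Image.Image, text: str = "AI-GENERATED") -> Image.Image:
    """Overlay a banner covering at least 10% of the frame's area."""
    w, h = frame.size
    banner_h = max(1, h // 10)  # 10% of the height across the full width = 10% of the area
    labelled = frame.copy()
    draw = ImageDraw.Draw(labelled)
    # Opaque banner across the bottom of the frame, then the label text on top.
    draw.rectangle([(0, h - banner_h), (w, h)], fill=(0, 0, 0))
    draw.text((10, h - banner_h + banner_h // 4), text, fill=(255, 255, 255))
    return labelled

if __name__ == "__main__":
    img = Image.new("RGB", (1280, 720), (40, 120, 200))  # stand-in for a real frame
    stamp_ai_label(img).save("labelled_frame.png")
```

Even this trivial case hints at the engineering questions the draft leaves open, such as whether the 10% is measured per frame or over the whole video, and how the rule applies to mixed or partially edited content.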
However, industry analysts and legal experts have raised concerns about the practicality of these requirements. Sindhuja Kashyap, partner at King Stubb & Kasiva, notes that current AI-detection systems often achieve accuracy rates of only 60–80%, which may fall short of what reliable compliance demands. This poses a significant challenge, especially for smaller intermediaries, who might find the standards nearly impossible to meet. The draft also places the burden of compliance squarely on intermediaries, who could lose their safe-harbour protections under the IT Act if they fail to detect or label synthetic content. With billions of uploads every month, errors are inevitable, raising questions about the fairness and effectiveness of such regulations.
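A back-of-the-envelope calculation shows why errors are inevitable at this scale. In the sketch below, only the 60–80% accuracy band comes from Kashyap's figures; the upload volume and the share of synthetic content are hypothetical assumptions chosen for illustration.

```python
# Rough error volumes implied by the quoted accuracy range. The upload volume
# and synthetic-content share are illustrative assumptions, not reported figures.

MONTHLY_UPLOADS = 2_000_000_000   # hypothetical: 2 billion uploads per month
SYNTHETIC_SHARE = 0.05            # hypothetical: 5% of uploads are synthetic

for accuracy in (0.60, 0.80):
    # Treat accuracy as the probability of a correct call on any single upload.
    missed = MONTHLY_UPLOADS * SYNTHETIC_SHARE * (1 - accuracy)        # synthetic content left unlabelled
    false_flags = MONTHLY_UPLOADS * (1 - SYNTHETIC_SHARE) * (1 - accuracy)  # genuine content mislabelled
    print(f"accuracy={accuracy:.0%}: ~{missed:,.0f} missed and "
          f"~{false_flags:,.0f} falsely flagged uploads per month")
```

Under these assumptions, even the optimistic 80% figure leaves tens of millions of misclassifications every month, each one a potential trigger for losing safe-harbour protection under a strict-liability reading of the draft.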
Experts therefore suggest that, instead of a blanket mandate, a risk-based approach focused on high-impact synthetic content would be more realistic. This would allow targeted, manageable enforcement, reducing the burden on platforms while addressing the most significant risks associated with AI-generated content.
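One way to picture such an approach is as a routing policy that spends scarce verification capacity where potential harm is highest. The tiers, thresholds, and signals in the sketch below are purely illustrative assumptions, not a proposal from the draft or from the experts quoted.

```python
# Illustrative sketch of risk-based routing: full detection only for
# high-reach, sensitive uploads; lighter handling for everything else.
from dataclasses import dataclass

@dataclass
class Upload:
    expected_reach: int      # e.g. follower count or projected views (assumed signal)
    sensitive_topic: bool    # e.g. elections, public figures, finance (assumed signal)

def review_tier(upload: Upload) -> str:
    """Route an upload to a handling tier based on its risk profile."""
    if upload.sensitive_topic and upload.expected_reach > 100_000:
        return "mandatory-detection-and-label"   # full verification pipeline
    if upload.sensitive_topic or upload.expected_reach > 1_000_000:
        return "automated-screening"             # detector plus spot checks
    return "self-declaration-only"               # rely on the uploader's declaration

print(review_tier(Upload(expected_reach=5_000_000, sensitive_topic=True)))
```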
In conclusion, while the intent behind the proposed labelling rules is commendable, the technological and practical challenges highlighted by experts indicate that a more nuanced and feasible approach may be necessary to effectively manage AI-generated content.