India’s government has introduced new regulations requiring global social media platforms to remove unlawful or flagged content within three hours of notification, an aggressive shift designed to counter harmful and potentially AI-generated material online. The rule, part of amendments to the country’s Information Technology guidelines, sharply shortens the previous 36-hour window and applies to platforms such as YouTube, X (formerly Twitter) and Meta, creating significant compliance pressure for big tech firms.
Officials said the accelerated takedown timeline is intended to curb the rapid spread of illegal content, including hate speech, misinformation and deepfakes, at a time when AI-enabled creation tools are increasingly used to generate problematic material. The rule reflects India’s broader push for stronger governance of digital platforms and greater accountability from tech companies operating in the country.
Critics and technology experts, however, have warned that the strict three-hour deadline may be technically and operationally difficult to meet, especially for smaller platforms or those without extensive moderation infrastructure. Some argue that such tight timelines could drive automated over-removal, or raise censorship concerns if human review is bypassed in the rush to comply.
India’s move aligns with broader global trends where governments are tightening oversight of digital speech and AI content. Similar efforts in the European Union and other jurisdictions emphasise transparency, provenance labelling and faster content moderation, though India’s latest rules stand out for their rapid compliance requirement and legal enforceability.