New Delhi is taking a firm stand against misleading content. Legal experts have warmly welcomed the government’s revised guidelines on AI-generated deepfakes, calling them clearer and more practical for social media platforms.
The Ministry of Electronics and Information Technology (MeitY) has updated the Information Technology Rules, 2021, shifting the focus from labeling all AI-generated content to targeting only deceptive material. Previously, platforms such as Facebook, Instagram, and YouTube faced demands for visible labels on all AI content; the emphasis is now on content that misleads or harms users.
Under the new rules, AI-generated content must carry clear markers—either visible labels or embedded metadata—so users can easily identify synthetic media. This empowers regulators to monitor deepfakes and intervene when necessary, promoting informed consumption of online information.
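The rules do not prescribe a particular technical standard for these markers. Purely as a rough sketch, the snippet below shows one way a platform could attach a machine-readable label to a PNG image using the Pillow library; the field names (ai_generated, generator) and file names are illustrative assumptions, not anything mandated by MeitY.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Open a synthetic image and attach a machine-readable provenance label
# as PNG text metadata (field names here are purely illustrative).
image = Image.open("generated.png")
label = PngInfo()
label.add_text("ai_generated", "true")
label.add_text("generator", "example-model-v1")
image.save("generated_labeled.png", pnginfo=label)

# Reading the label back: PNG text chunks are exposed via the .text mapping.
print(Image.open("generated_labeled.png").text.get("ai_generated"))  # -> "true"
```

Industry provenance standards such as C2PA content credentials serve a similar purpose at scale; the sketch above is only meant to make the idea of an embedded marker concrete.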
Sajai Singh, Partner at JSA Advocates & Solicitors, highlighted the shift: ‘These guidelines are a significant improvement over earlier drafts. By prioritizing misleading content, they offer social media companies workable solutions without overburdening legitimate AI uses.’
Key changes include a sharply reduced takedown window: platforms must remove deepfakes flagged by the government or courts within 3 hours, down from 36 hours. Once applied, AI labels cannot be removed or hidden. Companies are also required to deploy automated tools to detect and block illegal, obscene, or fraudulent AI content.
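The rules leave the choice of detection technology to the platforms themselves. As an illustration only of what an automated pre-publication check might look like, the sketch below compares an upload's hash against a blocklist built from takedown orders and verifies that an embedded AI label, like the one in the earlier example, is still intact; every name here is a hypothetical assumption, not a prescribed mechanism.

```python
import hashlib
from PIL import Image

# Hypothetical blocklist of SHA-256 digests of content already ordered removed;
# in practice this would be populated from government or court takedown orders.
KNOWN_FLAGGED_HASHES: set[str] = set()

def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def passes_ai_upload_checks(path: str) -> bool:
    """Illustrative checks for an upload declared as AI-generated: reject exact
    copies of flagged content and require the embedded label to be intact."""
    if sha256_of_file(path) in KNOWN_FLAGGED_HASHES:
        return False  # exact match with content already flagged for removal
    text_chunks = getattr(Image.open(path), "text", {})  # PNG text metadata, if present
    return text_chunks.get("ai_generated") == "true"
```

Real deployments would need far more than exact hash matching, such as perceptual hashing, classifier-based detection, and human review, but the sketch shows the kind of automated gate the rules contemplate.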
The approach addresses rising concerns over deepfakes in elections, misinformation campaigns, and harms to individuals. As AI technology advances, the guidelines set out a proactive framework for digital trust in India, balancing innovation with public safety.