India Tightens Rules on AI Deepfake Content, Shortens Takedowns
India has tightened its digital content rules to curb AI-generated deepfake and synthetic media. Platforms must clearly label such content and remove flagged material within three hours, reflecting rising concern over misinformation and online harm as amended IT rules take effect from February 2026.

New Delhi: India has tightened its digital content rules to regulate AI-generated and deepfake material, introducing mandatory labelling and faster removal timelines for online platforms.
The government amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules to address the growing misuse of synthetic media. Social media platforms must now label AI-generated content and act faster when authorities flag unlawful material.
Platforms Must Label AI-Generated Content
The revised rules define AI-generated or synthetically generated content as digital audio, visual or combined media created or altered using automated tools. Such content often appears authentic despite not being human-made.
Online platforms must clearly label this material so users can easily identify it. The government said this step will improve transparency and limit the spread of misleading content.
The requirement applies to major social media platforms and online intermediaries that host user-generated content.
Three-Hour Deadline for Content Removal
The amendments significantly shorten the time platforms have to remove flagged content. Companies must now act within three hours of receiving a notice from a court or a competent authority, down from the 36 hours allowed under the earlier rules.
The tighter deadline applies to deepfakes and other illegal material, including child sexual abuse content and extremist material. Authorities say quick action is essential to prevent harm and misuse.
Higher Responsibility for Intermediaries
The updated rules also increase accountability for online platforms. Companies must deploy tools to detect harmful or deceptive AI-generated content more effectively.
Platforms that fail to comply risk losing the safe-harbour protections granted under the intermediary guidelines. They may also face legal action for repeated violations.
The government published the amendments in the official gazette. The rules will take effect on February 20, 2026.
Why This Matters
Experts say the changes reflect rising concern over misinformation driven by advanced AI tools. Deepfakes can mislead users, damage reputations and influence public opinion.
Supporters believe stricter rules will help protect users and improve trust online.
Critics, however, warn that the three-hour deadline could be difficult to meet. They argue platforms may remove content too quickly to avoid penalties, which could affect free expression.