How is MeitY Planning to Combat Deepfakes with Mandatory Labels?

Key Takeaways
- Mandatory labeling of AI-generated content is crucial for transparency.
- Platforms must embed permanent unique metadata to identify synthetic content.
- Failure to comply could lead to legal repercussions under the IT Act.
- The initiative aims to combat misinformation and impersonation.
- Feedback is welcomed until November 6, 2025.
New Delhi, Oct 22 (NationPress) The Ministry of Electronics and Information Technology (MeitY) has unveiled draft amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which require platforms to clearly identify "synthetically generated content" and implement new technical requirements for services that facilitate its creation.
These proposed changes provide a clear definition of “synthetically generated information” and mandate that platforms, particularly significant social media intermediaries (SSMIs), label such content through metadata and visible or audible markings.
According to the IT rules, SSMIs include platforms with over 5 million registered users in India, such as Facebook, YouTube, and Snapchat.
The proposed regulation stipulates that social media platforms facilitating AI-generated content must ensure that the information is “prominently labelled or embedded with permanent unique metadata or identifier.”
This identifier must be visible or audible, “covering at least ten per cent of the surface area of the visual display or, in the case of audio content, during the initial ten per cent of its duration,” as stated in the rules.
The draft regulations also specify that the metadata or identifier must not be alterable, suppressed, or removed.
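The ten per cent requirement translates into simple arithmetic on a display's dimensions or an audio file's duration. The following sketch is illustrative only: the draft rules do not prescribe a label layout or an API, so the full-width-banner approach and all function names here are assumptions.

```python
# Illustrative sketch of the ten per cent labelling thresholds in the draft
# rules. The banner layout and function names are assumptions, not part of
# the regulation.

def min_banner_height_px(height_px: int) -> int:
    """Minimum height of a full-width banner whose area is at least ten per
    cent of the visual display: a full-width strip of height h covers
    width * h out of width * height, so h must be at least height / 10."""
    return (height_px + 9) // 10  # ceiling of height / 10, in whole pixels

def audio_label_window_s(duration_s: float) -> float:
    """Length of the opening segment (the initial ten per cent of the
    duration) during which an audible marking must be present."""
    return duration_s / 10.0
```

For a 1920x1080 video frame, for instance, a full-width banner would need to be at least 108 pixels tall; a 60-second audio clip would need the marking during its first 6 seconds.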
Should a platform knowingly permit unlabelled or incorrectly declared AI-generated content, it will be deemed to have failed to exercise due diligence under the IT Act.
Additionally, platforms that facilitate the creation or modification of synthetic content are required to adopt technical measures to verify and declare whether uploaded content is AI-generated, according to the ministry.
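The verify-and-declare duty could be sketched as a gate a platform applies before publishing an item. This is a hypothetical illustration: the draft rules define no wire format, so the metadata key names and the gating function below are assumptions.

```python
# Hypothetical sketch of the due-diligence gate described above: content
# declared as AI-generated may only be published if it carries a permanent
# identifier. Key names are assumptions; the draft rules define no format.

SYNTHETIC_FLAG = "synthetically_generated"   # assumed declaration field
IDENTIFIER_KEY = "permanent_identifier"      # assumed identifier field

def may_publish(metadata: dict) -> bool:
    """Allow publication if the item is declared non-synthetic, or if it is
    synthetic AND carries a non-empty permanent identifier."""
    if not metadata.get(SYNTHETIC_FLAG, False):
        return True  # declared not AI-generated; no labelling duty applies
    return bool(metadata.get(IDENTIFIER_KEY))  # synthetic: identifier required
```

Under such a scheme, an item declared synthetic but lacking an identifier would be blocked, which mirrors the rule treating unlabelled AI-generated content as a due-diligence failure.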
The ministry emphasized that this initiative is part of its larger objective to ensure an “open, safe, trusted, and accountable Internet” while tackling the rising threats of misinformation, impersonation, and election manipulation fueled by generative AI.
Stakeholders are invited to provide feedback on the draft by November 6, 2025.
Earlier this month, the government selected five projects under an IndiaAI initiative aimed at enhancing real-time deepfake detection, improving forensic analysis, and addressing other AI-related security challenges.