How is MeitY Planning to Combat Deepfakes with Mandatory Labels?

Synopsis

The Ministry of Electronics and Information Technology is taking significant steps to tackle the growing concerns surrounding deepfakes. With the introduction of mandatory labels and metadata tagging for AI-generated content, the government aims to create a safer digital space. Discover how these new regulations will affect social media platforms and their users.

Key Takeaways

  1. Mandatory labeling of AI-generated content is crucial for transparency.
  2. Platforms must embed permanent unique metadata to identify synthetic content.
  3. Failure to comply could lead to legal repercussions under the IT Act.
  4. The initiative aims to combat misinformation and impersonation.
  5. Feedback is welcomed until November 6, 2025.

New Delhi, Oct 22 (NationPress) The Ministry of Electronics and Information Technology has unveiled draft amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which require platforms to distinctly identify "synthetically generated content" and implement new technical requirements for services that facilitate its creation.

These proposed changes provide a clear definition of “synthetically generated information” and mandate that platforms, particularly significant social media intermediaries (SSMIs), label such content through metadata and visible or audible markings.

Under the IT Rules, an SSMI is a platform with more than 5 million registered users in India; examples include Facebook, YouTube, and Snapchat.

The proposed regulation stipulates that social media platforms facilitating AI-generated content must ensure that the information is “prominently labelled or embedded with permanent unique metadata or identifier.”

This identifier must be visible or audible, “covering at least ten per cent of the surface area of the visual display or, in the case of audio content, during the initial ten per cent of its duration,” as stated in the rules.

The draft regulations also specify that the metadata or identifier must not be alterable, suppressed, or removed.
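
To illustrate what such labelling could look like in practice, the sketch below shows one way a platform might overlay a visible marking covering at least ten per cent of an image's surface area and embed a unique identifier in its metadata. This is an illustrative example only: the Pillow library, the metadata key synthetic-content-id, and the banner text are assumptions for the sketch and are not prescribed by the draft rules.

```python
# Illustrative sketch only: one possible way to add a visible label and a
# unique metadata identifier to an AI-generated image. The metadata key and
# label text are assumptions, not requirements from the draft rules.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo
import uuid

def label_synthetic_image(src_path: str, dst_path: str) -> str:
    img = Image.open(src_path).convert("RGB")
    width, height = img.size

    # Visible marking: a bottom banner whose area is 10% of the image surface.
    banner_height = max(1, int(height * 0.10))
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, height - banner_height), (width, height)], fill=(0, 0, 0))
    draw.text((10, height - banner_height + 5),
              "AI-GENERATED CONTENT", fill=(255, 255, 255))

    # Unique identifier stored in a PNG text chunk. A real deployment would
    # need a tamper-resistant scheme (for example, signed provenance metadata)
    # to meet the "not alterable, suppressed, or removed" requirement; this
    # sketch does not implement that.
    identifier = str(uuid.uuid4())
    meta = PngInfo()
    meta.add_text("synthetic-content-id", identifier)
    img.save(dst_path, "PNG", pnginfo=meta)
    return identifier
```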

Should a platform knowingly allow unlabelled or incorrectly declared AI-generated content, it will be considered to have failed to exercise due diligence under the IT Act.

Additionally, platforms that facilitate the creation or modification of synthetic content are required to adopt technical measures to verify and declare if the uploaded content is AI-generated, according to the ministry.
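
Continuing the hypothetical example above, one shape such a technical measure could take is an upload-time check that looks for an embedded declaration. The metadata key reused here is the same assumed one from the earlier sketch and is not specified in the draft.

```python
# Hypothetical upload-time check reusing the assumed metadata key from the
# sketch above. Absence of the key does not prove the content is authentic;
# it only means no declaration was embedded.
from PIL import Image

def declare_if_synthetic(path: str) -> dict:
    img = Image.open(path)
    identifier = img.info.get("synthetic-content-id")  # PNG text chunks appear in img.info
    return {
        "is_declared_synthetic": identifier is not None,
        "identifier": identifier,
    }
```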

The ministry emphasized that this initiative is part of its larger objective to ensure an “open, safe, trusted, and accountable Internet” while tackling the rising threats of misinformation, impersonation, and election manipulation fueled by generative AI.

Stakeholders are invited to provide feedback on the draft by November 6, 2025.

Earlier this month, the government selected five projects under an IndiaAI initiative aimed at enhancing real-time deepfake detection, improving forensic analysis, and addressing other AI-related security challenges.

Point of View

I believe these proposed amendments by the Ministry of Electronics and Information Technology are crucial for ensuring accountability in the digital landscape. By mandating clear labeling of AI-generated content, we are taking significant steps to protect users from misinformation and uphold the integrity of online communication. This initiative aligns with our responsibility to promote a safe and trustworthy digital ecosystem.
NationPress
22/10/2025

Frequently Asked Questions

What is the deadline for stakeholder feedback on the new draft?
Stakeholders can provide feedback on the draft amendments until November 6, 2025.
What defines a significant social media intermediary (SSMI)?
A significant social media intermediary (SSMI) is defined as a platform with more than 5 million registered users in India.
What are the consequences for platforms that fail to comply?
Platforms that knowingly allow unlabelled or falsely declared AI-generated content will be deemed to have failed in exercising due diligence under the IT Act.
What is the purpose of the mandatory labeling?
The mandatory labeling aims to identify synthetic content clearly to protect users from misinformation and ensure transparency.
How will the identifiers be implemented?
The identifiers must be visible or audible, covering at least ten percent of the display area or during the initial ten percent of audio content duration.