Click, label, share: How India wants to tame AI-generated content on social media
ET CONTRIBUTORS | October 25, 2025 2:40 AM CST

Synopsis

India's proposed IT Rules 2021 amendments mandate labeling for synthetic AI-generated content on social media. While aiming for transparency, the rules face challenges with user comprehension, technical detection hurdles, and potential burdens on the AI industry. Effective enforcement and clearer guidance are crucial for success.

Subimal Bhattacharjee

Commentator on digital policy issues

On Oct 22, GoI proposed sweeping amendments to the IT Rules, 2021, focusing on synthetically generated information (SGI).
The draft amendments mandate that AI-generated content on social media platforms be labelled or embedded with a permanent, unique identifier. Platforms will also be required to obtain a user declaration confirming whether uploaded content qualifies as SGI.

Can mandatory labelling requirements keep pace with synthetic content? MeitY's answer - defining AI-generated information and mandating prominent labels covering 10% of the visual display or audio duration - represents a clear choice: transparency over prohibition.

The argument is sound. Yet, transparency hinges on a key assumption: users will consume labelled content rationally and make informed judgements. Anyone who has seen misinformation run rampant on WhatsApp knows how optimistic that assumption is.

The real test is: will people notice these labels or will they fade into digital wallpaper - present but ignored? From cigarette warnings to social media fact checks, labels tend to lose impact over time.

The draft's boldest provision, Rule 4(1A), obliges significant social media intermediaries to collect user declarations on SGI and verify them through 'reasonable and appropriate technical measures'. This is promising, but it can create a compliance nightmare.

The technical hurdle is steep: platforms must detect AI-generated content across formats, languages and manipulation types - yet, current detection tools are unreliable.

The draft amendments limit verification to public content, sparing private messages. This means platforms must distinguish public from private content at upload. Yet privately shared material can go public via screenshots or forwards.

Mandating platforms to verify user declarations puts them in a bind. Under-enforcement risks losing safe-harbour protections and incurring liability, while over-enforcement can censor legitimate content. The rule states platforms will be deemed to have failed due diligence if they 'knowingly permitted, promoted, or failed to act upon' unlabelled synthetic content. Yet, the word 'knowingly', in algorithmic content moderation contexts, is murky at best.

Rule 3(3) mandates that any intermediary offering resources enabling SGI creation must ensure permanent, non-removable labels. This applies not just to deepfake apps but potentially to any photo-editing tool, video production software or creative AI platform.

This has real implications for India's emerging AI industry. A startup building an AI video-editing tool would now need to design its product around mandatory labelling that persists through every export, share and modification. For consumer apps, this friction could make products globally uncompetitive. For B2B tools, it raises questions about commercial confidentiality - must every AI-enhanced corporate video carry a permanent SGI label?

These rules fail to distinguish between malicious deepfakes and benign creative uses. Collapsing these distinctions treats all synthetic content as suspicious. Moreover, the rules reflect a broader regulatory philosophy: when in doubt, maximise transparency and let users decide. But users aren't equipped for this.

Perhaps the amendments' most glaring weakness is enforcement. How will MeitY assess whether platforms use 'reasonable and appropriate' technical measures? Who will audit whether labels meet the 10% visibility threshold? And what happens when automated verification fails?

Takedown amendments - mandating joint secretary-level approval and monthly secretary-level reviews - show that GoI values senior oversight for rights-sensitive decisions. Yet, SGI verification is left to platform self-regulation, with only the threat of losing safe-harbour protection. The asymmetry is striking.

International experience offers cautionary tales. The EU's Digital Services Act has sparked debates about over-compliance and automated censorship. China's watermarking mandates face implementation difficulties and selective enforcement concerns. India's rules risk similar pitfalls without clear guidance on acceptable technical standards and margins for error.

Here's what needs to be done:

  • Platforms need standardised labels that are recognisable across services.
  • Digital literacy initiatives must teach citizens to notice labels and adjust information consumption accordingly.
  • Detection tech underlying verification systems must become more reliable, requiring sustained research investment.
  • GoI needs clearer safe harbours for innovation. Content labelled at creation shouldn't require relabelling across platforms, and creative or educational uses could have lighter requirements than factual claims. Accountability can coexist with nuance.

Effective regulation needs more than good intentions and legal definitions. It demands technological feasibility, industry cooperation, international coordination, and realistic expectations about what labels can achieve in an attention economy built for engagement, not accuracy.

Success isn't measured by compliance or label volume. The true metric is whether public trust in digital information grows - a goal that regulation alone cannot achieve.
The writer is a commentator on digital policy issues
(Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of www.economictimes.com.)

