
The Centre's amendments to the Information Technology rules, notified on Wednesday, defined artificial intelligence (AI)-generated content and brought it under the legal ambit, mandating labelling requirements for AI content and imposing new obligations on both users and platforms. The amendments also sent shock waves through the social media industry by slashing the timeline for taking down flagged unlawful content to three hours from notification, from the existing 36 hours.
Set to come into effect on 20 February, the new IT rules also cut the minimum time given to platforms to resolve user-reported grievances to seven days from 15, while mandating that non-consensual intimate imagery (NCII) be taken down within two hours, as opposed to 24 hours earlier. The move to compress the compliance timelines was necessitated by instances of unlawful content going viral within hours of being posted, senior Ministry of Electronics and Information Technology (MeitY) officials said. "We have tried to address the broadly felt concern that the existing timeline was lengthy," an official said.
The latest measures have been brought in to effectively counter rising instances of child sexual abuse material (CSAM), deepfakes targeting women, and NCII on platforms, he added.
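For a concrete sense of the compressed timelines, the hypothetical sketch below models them as a simple service-level table keyed by content category. The category names and the function are invented for illustration; the rules prescribe the deadlines reported above, not any implementation.

```python
from datetime import datetime, timedelta

# The notified compliance windows, keyed by illustrative category names
# (the rules define the deadlines, not this mapping or any code).
COMPLIANCE_WINDOWS = {
    "ncii": timedelta(hours=2),              # non-consensual intimate imagery, was 24 hours
    "flagged_unlawful": timedelta(hours=3),  # flagged unlawful content, was 36 hours
    "user_grievance": timedelta(days=7),     # grievance resolution, was 15 days
}

def action_deadline(category: str, notified_at: datetime) -> datetime:
    """Latest time by which a platform must act on a notice of `category`."""
    return notified_at + COMPLIANCE_WINDOWS[category]

# Example: an NCII notice received at 09:00 must be acted on by 11:00.
print(action_deadline("ncii", datetime(2026, 2, 20, 9, 0)))
```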
The IT rules define a wide range of information as unlawful content, including content prohibited under any law relating to the sovereignty and integrity of India, the security of the state, friendly relations with foreign countries, public order, decency or morality, contempt of court, defamation, and incitement to an offence, among others.
On whether platforms will be able to comply with such short timelines, Ashish Aggarwal, vice-president of policy at Nasscom, said, "This is an area where industry and government will need to work closely. If legal obligations are not technically achievable, it creates a risk of unintentional non-compliance. We hope the government has assessed technical feasibility before finalising these obligations, but this is something that will need deeper engagement with industry." He added, however, that the final rules are a substantial improvement over the earlier draft.
"Earlier, the draft required all content with any AI modification or enhancement to be labelled. That was impractical. The revised guidelines now clearly focus on synthetically generated content that is intended to mislead or falsify information. That intent-based focus is a very positive change," he said.
AI labelling
Meanwhile, the updated rules also call for a mandatory declaration from all social media users when posting AI-generated or modified content, and compel platforms to deploy technical measures to verify these declarations and to prominently label AI-generated images and audio as such. Tech intermediaries also have to inform their users of the AI regulations every three months.
Officials attributed the move to attempts to curb the rapid rise of AI-based deepfakes. While the draft amendments to the rules were released by MeitY in October last year, they have been notified a month after the Centre's tussle with social media platform X over its AI chatbot Grok churning out controversial images. While X restricted the feature to paying subscribers globally, the government warned that the move would not be enough to rein in obscene content, and asked the platform to remove it entirely.
However, stringent draft provisions requiring that at least 10% of the visual display area of a post, or the initial 10% of an audio clip's duration, be devoted to AI disclaimers have been dropped. The final rules also exempt 'good faith' and routine content edits that don't materially alter the substance. Instead, the 'prominent' labelling, wherever applicable, has to be embedded with unique metadata to ensure instant identification.
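To make the metadata requirement concrete, here is a minimal sketch of one way a label could be embedded and checked, assuming PNG text chunks as the carrier and the Pillow library as the tool. The field names and schema are invented for illustration; the rules mandate unique metadata but do not prescribe a format.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, tool: str) -> None:
    """Embed a machine-readable AI-content marker in a PNG's text chunks.

    The key names are illustrative only; no schema is specified in the rules.
    """
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("SyntheticContent", "true")
    meta.add_text("GeneratorTool", tool)
    image.save(dst_path, pnginfo=meta)

def has_ai_label(path: str) -> bool:
    """Read the marker back, one way a platform might verify a declaration."""
    return Image.open(path).text.get("SyntheticContent") == "true"

# Example: a generator labels its output; the platform checks it on upload.
label_as_ai_generated("render.png", "render_labelled.png", tool="example-model")
print(has_ai_label("render_labelled.png"))  # True
```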
Industry body Nasscom welcomed most of the new provisions but said the 10-day runway to implement them will be a challenge. While the new AI provisions in the rules apply only to significant social media intermediaries (SSMIs), those with 50 lakh (5 million) or more registered users in India, all technology intermediaries have to be part of the effort, officials said. All technology firms would have to embed the disclaimer in their content, whether they are a social media intermediary or merely provide software or services that leverage AI.
This brings a long list of popular AI-based software, apps and services under scrutiny, including OpenAI's ChatGPT, DALL-E and Sora; Google's Gemini, NotebookLM and Google Cloud; Microsoft's Copilot, Office 365 and Azure; and Meta AI.
"The definition now hinges on whether content appears real or is likely to be perceived as indistinguishable from real people or events, which introduces an inherently subjective, context-dependent standard that is difficult to translate into engineering rules," Aman Taneja, partner at Ikigai Law said.
All technology intermediaries have to identify and prevent users from posting AI-generated CSAM, NCII or any content that invades another person's privacy or falsifies documents and electronic records. "These have been strictly pointed out so that people don't get away by simply posting an AI disclaimer," the official said.






