Need India AI risk framework, says academic Amlan Mohanty
ETtech | July 22, 2025, 12:00 PM IST

Synopsis

As AI tools proliferate, experts urge stronger self-regulation in India, tailored to local risks and contexts. A proposed framework calls for classifying AI systems by risk, establishing baseline protections, and adopting voluntary commitments. Experts stress collecting real-world data, leveraging incentives, and building institutional support to ensure responsible, innovation-friendly AI development.

As new AI tools flood the market, experts are calling for stronger self-regulation to ensure safe and responsible use, supported by risk frameworks, government backing, and industry accountability.

Amlan Mohanty, an associate research fellow at the Centre for Responsible AI (CeRAI), IIT Madras, emphasised the need for a risk-based classification system to determine which AI systems are “low risk” and which are “high risk.”

Mohanty, also a non-resident fellow at Carnegie India and Niti Aayog who previously led public policy for Google in India, added that collecting real-world data on AI-related incidents in India is crucial for developing these frameworks.

In his recent paper, ‘Making AI Self-Regulation Work – Perspectives from India on Voluntary AI Risk Mitigation’, Mohanty outlined a practical policy roadmap for India, arguing that voluntary or self-regulation frameworks can help manage AI risks without stifling innovation.

“My suggestion would be to introduce baseline protections for all types of AI applications, while high-risk applications should be subject to additional rules,” he told ET. If an AI application causes harm or injury to an individual’s life or livelihood, it should be considered high-risk, the research explained.
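The tiering logic is simple to state: every application carries baseline duties, and anything that clears the life-or-livelihood test carries additional ones. As a loose illustration only, here is a minimal Python sketch of that rule; the RiskTier names, the specific duties listed in BASELINE and HIGH_RISK_EXTRAS, and the AIApplication fields are hypothetical assumptions for this example, not anything proposed in the paper.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    HIGH = "high"


# Illustrative baseline obligations applying to every AI application.
BASELINE = ["transparency notice", "grievance channel", "data protection"]

# Illustrative additional obligations reserved for high-risk applications.
HIGH_RISK_EXTRAS = ["pre-deployment risk assessment", "incident reporting", "human oversight"]


@dataclass
class AIApplication:
    name: str
    # The paper's test as reported: can the system cause harm or injury
    # to an individual's life or livelihood?
    affects_life_or_livelihood: bool


def classify(app: AIApplication) -> RiskTier:
    """Assign a risk tier using the life-or-livelihood test."""
    return RiskTier.HIGH if app.affects_life_or_livelihood else RiskTier.LOW


def obligations(app: AIApplication) -> list[str]:
    """Baseline protections for all apps; extra rules for high-risk ones."""
    duties = list(BASELINE)
    if classify(app) is RiskTier.HIGH:
        duties += HIGH_RISK_EXTRAS
    return duties


if __name__ == "__main__":
    chatbot = AIApplication("retail chatbot", affects_life_or_livelihood=False)
    lender = AIApplication("loan-approval model", affects_life_or_livelihood=True)
    print(classify(chatbot).value, obligations(chatbot))
    print(classify(lender).value, obligations(lender))
```

Running the sketch, the low-risk chatbot carries baseline duties only, while the hypothetical loan-approval model picks up the extra high-risk obligations.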

B Ravindran, head of the Wadhwani School of Data Science, IIT Madras, said it is important to govern AI risks across the entire AI life cycle.

“Organisations involved in AI development should proactively embrace voluntary self-regulations to lead the AI adoption in India in a safe, ethical, and responsible manner mitigating adverse impacts on humans,” he said.

While the government has not formally proposed voluntary AI risk commitments, a draft report prepared by a committee set up by the principal scientific advisor and convened by the Ministry of Electronics and Information Technology (MeitY), ‘AI for India-Specific Regulatory Framework,’ suggests that voluntary commitments could play a key role in the early phase of AI governance.

Defining risk, the Indian way

One of the key arguments in the paper is that India needs its own AI risk classification, shaped by local social and cultural factors.

“Do Indians worry about AI-based surveillance the same way Europeans do? Probably not. That’s why one cannot simply transpose the risk classification from the EU’s AI Act to Indian law. We need to collect real-world data from local communities,” Mohanty said.

The research stressed that reporting AI-related incidents will help identify the causes of harm and assess the nature of the impact, whether physical, emotional, financial, or otherwise. This evidence-led approach will be key to designing a contextual and effective AI risk classification system.
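To make the evidence-led approach concrete, the sketch below shows one hypothetical shape a structured incident record could take, with reported harms tallied by type to inform classification. The IncidentReport fields and HarmType categories are illustrative assumptions drawn from the impact types mentioned above, not a schema from the paper or from any government proposal.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum


class HarmType(Enum):
    PHYSICAL = "physical"
    EMOTIONAL = "emotional"
    FINANCIAL = "financial"
    OTHER = "other"


@dataclass
class IncidentReport:
    system: str        # AI system implicated in the incident
    description: str   # what happened, in the reporter's words
    harm: HarmType     # nature of the impact
    sector: str        # e.g. lending, health, education


def harm_profile(reports: list[IncidentReport]) -> dict[str, int]:
    """Tally reported harms by type; aggregates like this could feed an
    evidence-led risk classification per sector or system."""
    return {h.value: count for h, count in Counter(r.harm for r in reports).items()}


if __name__ == "__main__":
    reports = [
        IncidentReport("loan-approval model", "wrongful denial", HarmType.FINANCIAL, "lending"),
        IncidentReport("consumer chatbot", "distressing response", HarmType.EMOTIONAL, "consumer"),
    ]
    print(harm_profile(reports))  # {'financial': 1, 'emotional': 1}
```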

Experts recommend a mix of financial, technical, and regulatory incentives to encourage the adoption of voluntary AI commitments.

For example, companies seeking grants under the IndiaAI Mission could be required to adopt specific AI commitments as a condition for support. Leveraging the proposed AI Safety Institute in India to shape industry behaviour, develop benchmarks, and promote the use of AI safety tools is also seen as critical. Establishing a Technical Advisory Council could provide valuable expertise to government agencies, facilitate risk assessments, and support compliance efforts, experts said.

While India’s approach to AI regulation is pro-innovation, each country tailors its strategy to its own priorities. The US uses a decentralised, sector-specific approach; China emphasises centralised information and content controls. The EU has put in place the AI Act, which imposes risk-based legal obligations.
