
Govern AI before it governs you
ET CONTRIBUTORS | November 19, 2025 4:00 AM CST

Synopsis

India has unveiled updated guidelines for AI governance that strike a balance between fostering innovation and ensuring accountability. Companies are urged to implement AI responsibly within clear regulatory frameworks. With AI-related mishaps leading to hefty financial setbacks, countries around the globe are tightening their oversight of AI technologies.

Arpinder Singh

Swapnil Sule

The India AI Governance Guidelines, unveiled by MeitY earlier this month, mark a defining moment in how India will handle tech and trust. GoI has signalled confidence in the country's innovation ecosystem, letting AI grow while nudging organisations to bake in accountability, transparency and human oversight. For businesses and finance leaders, this is a call to start scaling AI responsibly, with clear governance frameworks, measurable outcomes and a human-centric lens.

EY's September 2025 'Responsible AI Pulse' global survey of 975 C-suite leaders reveals that nearly every company polled has suffered financial losses from AI-related incidents, with average damages exceeding $4.4 mn. The writing is on the wall: AI regulation is the need of the hour.

When left unchecked, AI systems can introduce risks that are difficult to detect and even harder to mitigate. Across the globe, manipulated media, biased algorithms and hallucinated responses are surfacing in ways that challenge legal norms, ethical boundaries and trust. Governments are responding with urgency, but the pace of regulation is yet to catch up with the speed of AI adoption.


The EU AI Act, in force since August 2024, uses a risk-based framework that categorises AI systems and levies heavy penalties for violations. Depending on the severity of the violation, companies can be fined up to €35 mn, or up to 7% of their global annual turnover, conveying a strong message to prioritise AI compliance. The US has taken a multi-pronged approach, targeting algorithmic discrimination and consumer transparency. China, meanwhile, has implemented stringent content-labelling and licensing requirements for generative AI models, emphasising information control.

India's AI governance framework is designed to be non-prescriptive: it aims to establish safety boundaries while allowing innovation to flourish. RBI's Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) Committee Report endorses this adaptive strategy, proposing seven guiding principles that include fairness, accountability and explainability.

It recommends structured governance mechanisms, such as board-approved AI policies, AI-incident reporting frameworks and risk-based audit protocols, to create an AI ecosystem that is ethically aligned and legally compliant. Though developed to harness AI's potential in the financial sector, these guidelines can also help companies in other high-stakes sectors, such as healthcare, law enforcement and IT, implement systems with human oversight.

While it may take years to achieve global convergence on regulations, it is imperative to agree on a set of universal non-negotiables so that AI innovation doesn't threaten organisational integrity. Businesses stand to suffer heavy financial, reputational and legal repercussions from oversight errors, data leaks or biased decision-making. AI deployed in silos, without centralised oversight, poses significant risks. Like any other third party, AI tools must be approached with caution and screened through a strict risk-assessment protocol, one that not only reviews incidents the AI system has already caused but also scans for potential risks.

Encouragingly, 80% of the EY survey respondents said their organisation had defined a set of responsible-AI principles. Execution lagged, however: only 67% had adopted controls, and 66% had established real-time monitoring to ensure adherence.

To truly harness AI's potential while safeguarding against its risks, organisations must implement structured guardrails. Key elements include risk assessments, accountability mechanisms, periodic testing of AI training data for biases, continuous monitoring of outputs to curb hallucinations, and AI-incident management mechanisms.

Standards bodies such as ISO (International Organisation for Standardisation) and NIST (US National Institute of Standards and Technology) have also introduced AI-management frameworks. ISO/IEC 42001, for instance, specifies requirements for an AI management system, giving organisations a robust framework for governing AI-related risks across their ecosystem.

Getting certified and training the workforce in line with these principles can help inculcate a sense of responsibility when it comes to leveraging AI tools. This will help lay a strong foundation for building AI infrastructure, establishing good governance, and setting up risk-management protocols and compliance checkpoints to prevent regulatory and legal pitfalls.

As organisations increasingly adopt AI, those that lead with governance, embedding ethics, transparency and accountability, will innovate not only faster but also more sustainably. The balance is critical: over-regulation could slow innovation, while under-regulation could erode trust.

Getting it right would mean enabling companies to innovate responsibly, embedding ethics and explainability by design. In the AI era, responsible innovation isn't just good practice, but also a competitive advantage when deeply rooted in governance.

(Singh is India and emerging markets leader, and Sule is director, EY)
(Disclaimer: The opinions expressed in this column are that of the writer. The facts and opinions expressed here do not reflect the views of www.economictimes.com.)

