
Top Scientific Advisor Urges Govt To Build An AI Risk Registry
Samira Vishwas | January 25, 2026 10:24 PM CST

SUMMARY

In a white paper, the office of the PSA said that the proposed database would record incidents relating to safety failures, biased outcomes, security breaches and misuse of AI

The report also proposed a ‘techno-legal’ framework to balance innovation and risk, integrating legal safeguards, technical controls and institutional mechanisms for AI development

The report also recommended that the Centre avoid a standalone AI law for now, and instead opt for sectoral guidelines and targeted amendments

The office of the principal scientific advisor (PSA) to the Union government has recommended the creation of a national database to record, classify, and analyse AI risks and incidents across the country.

In a white paper titled “Strengthening AI Governance Through Techno-Legal Framework”, the PSA said that the proposed database would record incidents relating to safety failures, biased outcomes, security breaches and misuse of AI.

The report also proposed a ‘techno-legal’ framework to balance innovation and risk, integrating legal safeguards, technical controls and institutional mechanisms to ensure trusted AI development.

The report added that the proposed database would enable post-deployment monitoring and accountability of AI platforms through measures such as:

  • India-specific risk taxonomy
  • Detection of systemic trends and emerging threats
  • Data-driven audits and targeted regulatory interventions
  • Evidence-based refinement of technical and legal controls

The advisor added that incident reports should be submitted by public bodies, private entities, researchers, and civil society organisations. The report also noted that the recommended database should draw on global best practices, but should be adapted to fit India’s sectoral realities and governance structures.

Besides, the advisor also recommended the creation of an AI Governance Group (AIGG), chaired by the PSA, to coordinate between various ministries and regulators. The group would also promote responsible AI innovation and the “beneficial” deployment of AI in key sectors, study emerging AI risks, identify regulatory gaps and recommend necessary legal amendments.

The report also pitched the creation of a dedicated tech and policy expert committee (TPEC) under the IT ministry (MeitY) to supply multidisciplinary expertise in areas such as law, public policy, machine learning, AI safety and cybersecurity, among others.

Finally, the white paper also called for the formation of an AI Safety Institute to evaluate high-risk systems, develop safety tools, enable capacity building and training, and drive global engagement.

The Finer Print

The report also recommended that the Centre avoid a standalone AI law for now, adding that the government should instead fix existing gaps through sectoral guidelines and targeted amendments. It also called on regulators to pivot from “command-and-control” enforcement to a techno-legal model, where legal duties are encoded into system design and technical controls.

The white paper also called on the Centre to anchor AI governance on parameters such as privacy, security, safety and fairness. It also emphasised transparency, accountability, explainability, provability, and an “enabling” posture to unlock innovation in the sector.

The PSA’s recommendations also include scaling up enforcement through standardised and automated checks, and building measurable accountability via logs, attestations and audit trails. It also bats for utilising the country’s digital public infrastructure (DPI) to lower compliance costs for “smaller” ventures.

It also suggests end-to-end lifecycle controls for AI platforms, from data collection and usage to model assessment and kill switches for agentic AI. The white paper also proposes mandatory disclosures, human oversight, grievance redressal and compulsory audits for AI platforms.

Terming deepfakes a systemic risk, the report called for a techno-legal approach built around content provenance – mandatory disclosure, persistent identifiers, and cryptographic metadata at the point of generation and distribution.

It also called for building infrastructure obligations such as usage logging, repeat-offender detection, and coordinated incident reporting across platforms to curb synthetic content.

The white paper also called for shoring up India-specific evaluation of AI platforms, noting that Western benchmarks do not reflect local languages, accents and skin tones.

The AI report comes as the government prepares to host the India AI Impact Summit 2026 in New Delhi next month. The event is reportedly expected to see participation from global AI leaders, including Nvidia CEO Jensen Huang, OpenAI chief executive Sam Altman, Google CEO Sundar Pichai and Anthropic cofounder Dario Amodei.

Sources told TechCrunch that Altman plans to meet key tech executives, founders and government officials during the trip. OpenAI is also said to be looking at the country as a potential base for infrastructure expansion.
