Cyber insurers in India are asking companies detailed questions about their use of artificial intelligence as a condition for coverage, signalling a shift in how the industry evaluates technology risk.
The new queries go beyond standard cybersecurity checklists. Underwriters now seek details on the types of AI companies deploy, how data flows through those systems, who can access AI models and whether firms have processes to detect and fix unusual outputs. Insurers are also examining deployment safeguards and how quickly companies can respond if an AI system fails.
“Insurers are increasingly asking organisations about their use of AI,” said S Vishwanathan, head of underwriting and reinsurance at SBI General Insurance. “These questions help insurers understand new types of threats and ensure policies keep up with how AI is changing the risk landscape.”
Until recently, underwriting questionnaires were based on established frameworks such as ISO 27001 and the NIST Cybersecurity Framework, neither of which addressed AI-specific risks. That is beginning to change. Insurers are issuing additional questionnaires focused on AI-assisted coding tools, data handling in AI systems and governance controls around generative AI.
“Disclosures given to insurers for cyber risk underwriting have traditionally relied on standardised questionnaires linked to frameworks such as ISO 27001 and NIST,” said Tanuj Gulani, president of Prudent Insurance Brokers. “These questionnaires historically did not include AI-specific factors, which meant that risks related to AI development tools were often underreported.”
At Marsh India, AI-related underwriting questions typically arise when a client reports AI-linked revenue or operational exposure. Once flagged, insurers seek detailed information on governance, risk ownership, bias detection, API security, model validation, data protection and audit logging.
“Large corporations, especially those in the tech and consulting sectors, have gradually begun to disclose AI coding tools and generative AI platforms used in their development processes,” said Ritesh Thosani, cyber practice leader at Marsh India.
The shift also has financial implications. Companies using AI-generated code without adequate safeguards may face higher premiums or stricter coverage terms, while those with stronger controls are likely to secure lower premiums and broader coverage.
Thosani said insurers lack sufficient loss data to price AI risk accurately because the widespread use of generative AI began only in late 2022. Privacy breaches remain the biggest concern, as AI systems process large volumes of sensitive data. Business interruptions from AI outages, errors caused by hallucinations and supply-chain risks from third-party AI platforms are also drawing closer scrutiny.
Underwriters are particularly worried about systemic risk—a scenario in which a flaw in a widely used AI model affects multiple companies simultaneously.
“If a widely used AI system has a flaw, multiple companies could be impacted at once,” said Vishwanathan. “This could lead to cascading cyber incidents.”
He added that insurers are urging companies to diversify their AI platforms, conduct independent security audits and maintain robust incident-response plans.
Boards are also facing tougher internal questions. Gulani said directors now want to know where AI training data originates, how it is verified and what accountability mechanisms exist when AI outputs lead to errors or financial losses.
Voluntary disclosure by companies remains uneven. But as AI becomes more embedded in business operations, insurers, brokers and risk managers increasingly see it as a risk that cannot be ignored.