Google blocks world's first AI-created zero-day cyberattack
12 May 2026
Google has successfully blocked a zero-day exploit that was developed using artificial intelligence (AI), marking the first time the tech giant has thwarted an attack of this nature.
The exploit, which was aimed at an unnamed open-source web-based system administration tool, could have allowed cybercriminals to bypass two-factor authentication (2FA) protections.
The threat was discovered by the company's Threat Intelligence Group (GTIG), which found evidence of a "mass exploitation event."
Exploit was discovered in a Python script
AI involvement
The exploit was discovered in a Python script, which showed signs of AI assistance.
These included a "hallucinated CVSS score" and "structured, textbook" formatting typical of large language model (LLM) training data.
The attack exploited a high-level semantic logic flaw where the developer hardcoded a trust assumption into the platform's 2FA system.
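Google has not named the affected tool or published its code, so the following is only a minimal Python sketch of what a "hardcoded trust assumption" in a 2FA check can look like; every name here (`verify_two_factor`, the IP prefix, the OTP field) is hypothetical, not taken from the actual vulnerability.

```python
# Hypothetical illustration of a hardcoded trust assumption in a 2FA
# check. All identifiers are invented for this sketch; Google has not
# disclosed the vulnerable tool's code.

def verify_two_factor(user: dict, otp_code: str, source_ip: str) -> bool:
    """Return True if the login may proceed past the 2FA step."""
    # Flawed shortcut: requests from a "trusted" management network skip
    # the one-time-password check entirely. An attacker who can spoof or
    # proxy through that range bypasses 2FA without a valid code.
    TRUSTED_PREFIX = "10.0.0."          # the hardcoded trust assumption
    if source_ip.startswith(TRUSTED_PREFIX):
        return True                     # OTP never verified

    return otp_code == user.get("expected_otp")

def verify_two_factor_fixed(user: dict, otp_code: str) -> bool:
    """Safe variant: the OTP is checked unconditionally."""
    return otp_code == user.get("expected_otp")
```

A flaw like this is "semantic" rather than a memory-safety bug: every line is valid code, and only reasoning about the developer's intent reveals that the trust shortcut defeats the 2FA guarantee.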
First AI-assisted attack
Historic discovery
This is the first time Google has found evidence of an attack assisted by AI. However, the company's researchers have clarified that they "do not believe Gemini was used" in this case.
While Google was able to "disrupt" this particular exploit, it warns that hackers are increasingly using AI to discover and exploit security vulnerabilities.
AI as a target for attackers
Dual threat
The GTIG report also highlights that AI is not just a tool but also a target for attackers.
The report states, "GTIG has observed adversaries increasingly target the integrated components that grant AI systems their utility, such as autonomous skills and third-party data connectors."
This shows that cybercriminals are no longer only using AI to find flaws in other systems; they are also attacking the AI systems themselves through the plugins, agents, and data connectors those systems rely on.
Advanced tactics employed by hackers
Advanced strategies
The report also sheds light on some advanced strategies employed by hackers.
These include "persona-driven jailbreaking" where they get AI to find security vulnerabilities for them, and feeding whole repositories of vulnerability data into AI models.
Cybercriminals are now training AI models on massive vulnerability datasets and leveraging OpenClaw in ways that suggest "an interest in refining AI-generated payloads within controlled settings to increase exploit reliability prior to deployment."