As artificial intelligence becomes central to cyber defence, security leaders warn that automation is reducing workloads — but may also be quietly reshaping risk in ways organisations do not yet fully understand.
Across the UAE, companies are deploying AI to triage alerts, automate responses and ease pressure on overstretched security operations centres (SOCs). But while fewer alerts and faster closure rates can look like progress, experts caution that this efficiency may come at the cost of visibility.
“When an automated system fails, it often just stops catching what it should,” said Mohammad Ismail, Vice President for EMEA at Cequence Security. “There’s no obvious error — just a gap that widens until something breaks.”
Ismail said the real danger is not AI itself, but how success is measured. “Reduced workload feels like progress. It isn’t always,” he said, warning that organisations can start equating silence with safety rather than examining what systems may be missing. In a region facing “hundreds of thousands of cyberattacks daily,” he added, “the margin for missed detection is already razor-thin.”
According to UAE officials, the country is absorbing up to 800,000 cyberattacks a day, many now powered by AI-driven tools such as FraudGPT and WormGPT. Defence systems trained on historical patterns, Ismail said, “are not built to catch tomorrow’s AI-generated threat.”
A similar concern was raised by Uzair Gadit, CEO of Secure.com, who said automation can create “confidence without certainty.”
Photo: Uzair Gadit (left) and Mohammad Ismail.
“A team may see fewer alerts, faster closure rates, and cleaner dashboards, yet have less visibility into what the system filtered out, downgraded, or missed,” Gadit said. “In that case, the organisation is not more secure — it is simply less aware of its blind spots.”
Both executives stressed that AI works best when it supports human judgment rather than replacing it. Gadit described this as “guided automation,” where human decisions are embedded into systems through pre-approved workflows. “So the goal is not maximum automation,” he said. “It is visible automation, with human judgment captured, repeated, and scaled.”
Another growing concern is the rise of so-called “shadow AI” — unsanctioned tools adopted by employees to speed up everyday work. What looks like productivity, Ismail warned, may conceal serious governance gaps.
“None of these people has malicious intent,” he said. “But each one has potentially created a data flow, an API connection, or an access pathway that the security team never approved, never reviewed, and in most cases, can’t even see.”
Gadit said shadow AI is now “one of the biggest internal security risks many companies face,” because adoption is moving faster than governance. “Company data, customer information, code, and even operational decisions can start flowing through systems the business does not fully see or control,” he said.
With a global cybersecurity skills shortage and rising alert fatigue, reliance on AI is likely to deepen. The challenge, both experts agree, is balancing speed with accountability.
“Human oversight isn’t a bottleneck,” Ismail said. “Poorly designed human oversight is.”
In the emerging AI-driven SOC, the question is no longer whether machines will take the lead — but whether humans remain firmly in control where it matters most.