
When AI chooses targets faster than humans can think, who owns consequences?
Samira Vishwas | February 6, 2026 7:25 PM CST

Artificial intelligence (Image: IANS)

In early 2026, a restricted defence review, circulated among senior security principals in an Indo-Pacific capital, acknowledged a reality militaries had avoided stating plainly: in specific operational envelopes, AI-enabled systems now detect, prioritise, and initiate lethal responses faster than human confirmation is operationally feasible. The disclosure was not tied to a crisis or a battlefield catastrophe. That is precisely why it matters. Across integrated air defence, counter-drone warfare, maritime surveillance, and cyber-kinetic fusion, the sensor-to-shooter cycle has collapsed from minutes to seconds and, in certain architectures, to sub-second machine execution. NATO’s 2026 internal assessment bluntly noted that “human cognition has become the pacing vulnerability in contested battlespace decision loops.” This is not speculative futurism. It is a structural shift already embedded in doctrine, procurement, and operational design. Paradigm shifts in warfare rarely announce themselves with explosions; they reveal themselves quietly, after governance has already fallen behind capability.

How “Decision Support” Became De Facto Decision Authority

For nearly two decades, defence establishments reassured political leadership that artificial intelligence would remain strictly “decision-supportive,” never substituting human authority. By 2026, that assurance has collapsed under operational reality. Empirical studies across US, Israeli, and allied NATO forces show that under high-tempo conditions, human operators override AI-generated recommendations in just 4–6 percent of engagements. This is not dereliction; it is neurological inevitability. Human cognition processes complex threat environments in hundreds of milliseconds. AI systems ingest, correlate, predict, and act in microseconds. When an algorithm presents fused sensor data, intent prediction, collateral estimation, and response options simultaneously, rejecting it requires not just courage but time that no longer exists. As former US Deputy Secretary of Defense Bob Work warned, “once speed becomes decisive, delegation becomes inevitable.” Authority migrates not through policy change, but through tempo. Humans increasingly ratify outcomes already determined by machines.

The Collapse of the OODA Loop Into Executable Code

John Boyd’s OODA loop—Observe, Orient, Decide, Act—once defined advantage by privileging faster human cognition and adaptive judgment. Artificial intelligence collapses this loop into continuous machine simultaneity. Observation is persistent through satellites, ISR drones, and electronic surveillance; orientation is algorithmic pattern recognition; decision is probabilistic inference; action is automated execution. What disappears is deliberation. By 2026, multiple advanced militaries openly concede that AI-enabled forces can “out-cycle” human-centric adversaries by orders of magnitude. RAND simulations indicate automation-driven tempo advantages can determine tactical outcomes within 48–72 hours of escalation. Yet compression breeds fragility. Errors propagate instantly. Bias scales lethally. A misclassified radar anomaly or corrupted data stream can cascade into kinetic response before escalation controls activate. The paradox is brutal: systems designed to reduce surprise may increase accidental war. Speed becomes power—and systemic vulnerability.

Divergent Doctrines, Unequal Strategic Risks

AI-enabled lethality is not pursued uniformly; it reflects political culture and strategic temperament. China integrates autonomy into deterrence doctrine, betting on swarm saturation, algorithmic intimidation, and decision-paralysis of adversaries. The United States emphasises AI-assisted command and control, constrained by legal review, congressional oversight, and civilian accountability. Israel, forged in continuous conflict, deploys battlefield-validated autonomy with ruthless pragmatism. Russia and Ukraine have transformed active warzones into live laboratories, accelerating AI-assisted targeting and drone swarms under combat stress. European powers remain cautious, prioritising governance over velocity. These doctrinal asymmetries matter. Automation is not neutral; it encodes values, risk tolerance, and political accountability. The future balance of power will not hinge on who possesses the most advanced algorithms, but on who integrates them most coherently into doctrine, law, command authority, and escalation control.

Accountability Without Intent: The Legal Vacuum

When an AI-enabled system selects a target that later proves unlawful, disproportionate, or escalatory, responsibility fractures. International humanitarian law presumes human intent, proportionality judgment, and conscious distinction between combatants and civilians. Algorithms possess none of these; they optimise mathematical objectives. A 2026 multinational survey of military legal advisers across 14 countries revealed no consensus on assigning culpability for autonomous lethal action. This ambiguity is destabilising. Diffused responsibility weakens deterrence against misuse and lowers political thresholds for force. Hannah Arendt warned of evil emerging through bureaucratic distance; AI introduces a more dangerous distance—statistical violence without intent. When no one clearly owns consequences, escalation becomes easier to rationalise. Accountability delayed becomes accountability denied, and in machine-speed warfare, delay is structurally guaranteed unless governance is redesigned at the architectural level.

Precision Is Not Ethics

Proponents argue that AI makes warfare more precise and therefore more humane. Precision, however, is not morality. AI systems reduce certain forms of collateral damage while introducing others that are less visible but no less corrosive. Pattern-of-life algorithms routinely misinterpret cultural behaviour. Training datasets embed historical bias. A 2025 Nature Machine Intelligence study demonstrated that target-classification accuracy drops by 20–30 percent when models are deployed outside their original geographic and behavioural contexts. More dangerously, precision lowers political inhibition. When leaders believe strikes are “surgically clean,” authorisation becomes easier and more frequent. Ethicists describe this as automation-enabled moral hazard. Philosopher Luciano Floridi captured the danger succinctly: “The cleaner the interface, the dirtier the consequences.” AI does not eliminate ethical complexity; it conceals it behind dashboards and confidence scores.

Strategic Stability Under Algorithmic Reaction Time

Cold War stability relied on time: time to verify, deliberate, and de-escalate. AI erodes that buffer. Early-warning systems increasingly rely on machine learning to detect missile launches, cyber intrusions, and anomalous behaviour. While accuracy improves, false positives become more dangerous as response windows shrink. Strategic planners now quietly debate whether automated retaliation thresholds are drifting closer to operational reality. The danger is not rogue AI, but perfectly functioning AI responding to imperfect data. UN Secretary-General António Guterres warned that “machines must never be allowed to make life-and-death decisions.” Yet competitive pressure pushes states toward automation to avoid being outpaced. Strategic stability now rests on assumptions about adversaries’ algorithms, an inherently brittle foundation for peace.

The Corporate–Military–Algorithmic Nexus

Unlike past military revolutions, AI’s intellectual core resides largely in private hands. Cloud providers, chip manufacturers, and model developers form an unspoken triad with defence establishments. By 2026, global defence AI expenditure exceeds USD 70–80 billion, much of it tied to proprietary systems. Corporate incentives reward performance metrics and market dominance, not ethical restraint. When models trained for advertising optimisation are repurposed for target prioritisation, value systems collide. A senior European defence official admitted privately, “we rent our strategic cognition.” This raises sovereignty questions. If proprietary algorithms shape lethal outcomes, can democratic states truly claim ownership of decisions? The battlefield is now as much a supply chain of code, compute, and contracts as physical terrain.

Human Judgment as Strategic Capital

As machines accelerate, human judgment becomes more, not less, valuable. Judgment is not speed; it is wisdom under uncertainty. It integrates law, ethics, political context, and long-term consequence: dimensions AI does not comprehend. Militaries that reduce humans to supervisors of automation risk hollowing out strategic culture. Empirical research shows prolonged exposure to algorithmic authority reduces dissent and critical challenge over time. Philosopher Yoram Hazony reminds us that “responsibility requires the capacity to say no.” In a machine-speed kill chain, saying no becomes structurally harder. Preserving meaningful human control is not nostalgia; it is strategic risk management. States that neglect this may win tactical engagements yet lose legitimacy and, ultimately, control over escalation.

India’s Strategic Imperative in the AI Battlespace

For India, the AI-lethality debate is not abstract; it is existential. Facing two nuclear-armed adversaries, persistent grey-zone conflict, and accelerating Chinese automation along the Line of Actual Control, India confronts a stark dilemma. Strategic restraint preserves legitimacy and democratic accountability; delay risks asymmetry in machine-speed warfare. India’s imperative is not imitation, but doctrinal innovation: integrating AI for surveillance, decision-support, and defensive automation while retaining firm human authority over lethal escalation. India’s strength lies in its institutional depth: civil-military balance, judicial oversight, and strategic patience. Leveraging AI without surrendering judgment can become India’s differentiator. In a region racing toward automation, restraint backed by capability may prove more stabilising than speed without control. India’s choices will shape not only its security, but the ethical tone of the Indo-Pacific order.

The Failure of Global Governance

Despite years of debate, governance of autonomous weapons remains fragmented. UN processes generate principles without enforcement. Major powers resist binding constraints, fearing asymmetrical disadvantage. By 2026, more than 90 states support regulation in principle, yet they cannot agree on definitions of autonomy, meaningful human control, or accountability mechanisms. This vacuum rewards speed over responsibility. History offers a warning: arms races without norms end badly. Chemical weapons were constrained not by technology, but by taboo reinforced through law. AI lacks such a taboo. Without shared red lines, escalation becomes algorithmically normalised. As one diplomat remarked in Geneva, “we are regulating yesterday’s weapons with yesterday’s language, while tomorrow’s wars write their own rules.”

Engineering Ethics Into the Kill Chain

The future demands a shift from ethical declarations to ethical architecture. Responsibility must be engineered, not appended. This includes auditable algorithms, traceable decision logs, human-interrupt mechanisms, and command chains that cannot be bypassed by speed. Some militaries now experiment with ethical latency: deliberate pauses that allow human review in ambiguous scenarios. Critics call this inefficient; strategists call it stabilising. Transparency is equally critical. Black-box systems undermine trust internally and internationally. IEEE standards warn that “opacity is incompatible with accountability.” Designing AI that can explain, defer, or refuse action is not a moral luxury; it is a strategic necessity in machine-speed warfare.
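By way of illustration only, the sketch below shows what such an “ethical latency” gate could look like in outline: an automated recommendation is held for a mandatory review window, ambiguous cases cannot proceed without explicit human approval, and every decision is written to an auditable log. This is a minimal conceptual sketch in Python, not a description of any fielded system; all class names, thresholds, and parameters are hypothetical assumptions introduced for clarity.

```python
import time
import uuid
from dataclasses import dataclass

# Hypothetical illustration only: every name and threshold here is invented.

@dataclass
class Recommendation:
    target_id: str
    confidence: float           # model confidence score, 0.0-1.0
    collateral_estimate: float  # estimated collateral risk, 0.0-1.0

@dataclass
class DecisionRecord:
    record_id: str
    recommendation: Recommendation
    approved: bool
    reviewer: str
    timestamp: float

class EthicalLatencyGate:
    """Inserts a mandatory human-review pause into an automated recommendation
    flow and writes an auditable log entry for every decision."""

    def __init__(self, review_window_s: float = 5.0, ambiguity_threshold: float = 0.9):
        self.review_window_s = review_window_s        # the deliberate pause ("ethical latency")
        self.ambiguity_threshold = ambiguity_threshold
        self.audit_log: list[DecisionRecord] = []     # traceable decision log

    def review(self, rec: Recommendation, human_approval) -> bool:
        # Ambiguous or risky recommendations never proceed on model output alone.
        requires_human = (rec.confidence < self.ambiguity_threshold
                          or rec.collateral_estimate > 0.0)
        deadline = time.monotonic() + self.review_window_s

        approved, reviewer = False, "none"
        if requires_human:
            # Block until an explicit human decision or the window expires.
            # Expiry defaults to refusal: silence is never consent.
            while time.monotonic() < deadline:
                decision = human_approval(rec)
                if decision is not None:
                    approved, reviewer = decision, "human-operator"
                    break
                time.sleep(0.1)
        else:
            approved, reviewer = True, "policy-auto"

        self.audit_log.append(DecisionRecord(
            record_id=str(uuid.uuid4()), recommendation=rec,
            approved=approved, reviewer=reviewer, timestamp=time.time()))
        return approved

if __name__ == "__main__":
    gate = EthicalLatencyGate(review_window_s=1.0)
    # A stand-in operator callback that withholds approval; the gate refuses by default.
    result = gate.review(
        Recommendation("track-042", confidence=0.71, collateral_estimate=0.2),
        human_approval=lambda rec: None)
    print("approved:", result, "| log entries:", len(gate.audit_log))
```

The point of the sketch is structural, not technical: the pause, the default to refusal, and the permanent log are design properties that cannot be added after the fact; they must sit inside the decision path itself.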

The Strategic Courage to Be Slower

The hardest strategic choice of the AI age may be the courage to slow lethal decision-making. Not slower innovation, but slower execution. Financial markets deploy circuit breakers to prevent flash crashes. Warfare has none, despite infinitely higher stakes. Strategic maturity in 2026 requires recognising that not every technical advantage should be exploited to its limit. Sun Tzu wrote that supreme excellence lies in subduing the enemy without fighting. In an AI-saturated battlespace, supreme leadership may lie in knowing when not to let machines act, even when they can. This choice is politically difficult, strategically unfashionable, and morally essential.

Owning Consequences Before Machines Erase Them

AI already chooses targets faster than humans can think. The remaining question is whether humans will choose to own the consequences before machines render ownership meaningless. History will judge leaders not by the sophistication of their algorithms, but by whether they preserved responsibility in an age designed to dissolve it. The first AI-accelerated war will not be remembered for who fired first, but for who failed to slow the machine. The future is being coded now, line by line, doctrine by doctrine. If speed outruns ethics and automation eclipses accountability, victory may come at the cost of agency itself. In 2026 and beyond, restraint is no longer weakness. It is the last strategic advantage humans still control.

(Major General Dr. Dilawar Singh, IAV, is a distinguished strategist having held senior positions in technology, defence, and corporate governance. He serves on global boards and advises on leadership, emerging technologies, and strategic affairs, with a focus on aligning India’s interests in the evolving global technological order.)
