A nuclear war started by AI sounds like science fiction. It isn’t

March 21, 2025
We are ignoring a spectre on the horizon: a global nuclear war triggered by artificial intelligence. UN Secretary-General António Guterres has warned of it. But so far, nuclear-weapons states have avoided talks on this cataclysmic threat.
They argue that there is an informal consensus among the five biggest nuclear powers on the “human in the loop” principle. None of the five say they deploy AI in their nuclear-launch command systems. This is true but misleading.
They do, however, use AI for threat detection and target selection. AI-powered systems analyse vast amounts of data from sensors, satellites and radars in real time, assess incoming missile attacks and recommend response options. Human operators then cross-check the threat against different sources and decide whether to intercept the enemy missiles or launch retaliatory strikes. Currently, the response time available to human operators is 10 to 15 minutes. By 2030, it will shrink to between five and seven minutes. Even though human decision-makers will make the final call, they will be swayed by the AI’s predictive analytics and prescriptions. AI may be the driving force behind launch decisions as early as the 2030s.
The problem is that AI is prone to errors. Threat-detection algorithms can indicate a missile attack where none exists. Such a false alarm could stem from a computer error, a cyber intrusion or environmental factors that obscure the signals. Unless human operators can confirm from other sources within two to three minutes that the alarm is false, they may activate retaliatory strikes. AI used in civilian functions such as crime prediction, facial recognition and cancer prognosis is known to have an error margin of around 10 per cent. In nuclear early-warning systems, it could be around 5 per cent. As the precision of image-recognition algorithms improves over the next decade, this margin may decline to 1-2 per cent. But even a 1 per cent error margin could initiate a global nuclear war.
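To see why even a small per-alert error rate matters, consider a back-of-the-envelope illustration. The sketch below is not drawn from any real early-warning system; the alert frequency is a hypothetical assumption, and alerts are treated as independent purely for simplicity.

```python
# Minimal sketch: how a per-alert false-alarm rate compounds over repeated alerts.
# The alert frequency (100 ambiguous alerts per year) is a hypothetical assumption,
# and alerts are treated as statistically independent for simplicity.

def prob_at_least_one_false_alarm(per_alert_error: float, alerts: int) -> float:
    """P(at least one false alarm) = 1 - (1 - p)^n for n independent alerts."""
    return 1 - (1 - per_alert_error) ** alerts

if __name__ == "__main__":
    alerts_per_decade = 100 * 10  # hypothetical: 100 ambiguous alerts a year, over 10 years
    for p in (0.10, 0.05, 0.01):
        risk = prob_at_least_one_false_alarm(p, alerts_per_decade)
        print(f"per-alert error {p:.0%}: {risk:.2%} chance of at least one false alarm per decade")
```

Under these illustrative assumptions, even a 1 per cent per-alert error rate makes at least one false alarm over a decade a near certainty, which is why a seemingly small error margin remains dangerous.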
The risk will increase in the next two to three years as new agentic malware emerges, capable of worming its way past threat-detection systems. This malware will adapt to avoid detection, autonomously identify targets and automatically compromise them.
There were several close calls during and shortly after the Cold War. In 1983, a Soviet satellite mistakenly detected five missiles launched by the United States. Stanislav Petrov, an officer at the Serpukhov-15 command centre, concluded that it was a false alarm and did not alert his superiors, who could have launched a counter-attack. In 1995, the Olenegorsk radar station detected what appeared to be a missile launched off Norway’s coast. Russia’s strategic forces were placed on high alert and President Boris Yeltsin was handed the nuclear briefcase. He suspected a mistake and did not press the button. The object turned out to be a scientific rocket. If AI had been determining the response in either situation, the outcome could have been disastrous.
Currently, hypersonic missiles rely on conventional automation rather than AI. They can travel at speeds of Mach 5 to Mach 25, evade radar detection and manoeuvre in flight. Major powers plan to enhance hypersonic missiles with AI to locate and instantly destroy moving targets, shifting the kill decision from humans to machines.
There is also a race to develop artificial general intelligence, which could lead to AI models operating beyond human control. Once this happens, AI systems will learn to augment and replicate themselves, taking over decision-making processes. When such an AI is integrated into decision-support systems for nuclear weapons, machines will be able to initiate devastating wars.
Humans have perhaps five to 10 years before algorithms and plutonium could reduce us to skeletons and skulls. We need a comprehensive agreement among major powers to mitigate this risk, going beyond reiteration of the “human in the loop” slogan. This agreement must include transparency, explainability and cooperation measures; international standards for testing and evaluation; crisis-communication channels; national oversight committees; and rules prohibiting aggressive AI models capable of bypassing human operators.
Geopolitical shifts have created an unexpected opportunity for such a treaty. Leading AI experts from China and the United States have taken part in several track-two dialogues on AI risks, which helped lead to a joint statement by then US President Joe Biden and Chinese President Xi Jinping in November 2024.
Elon Musk is a staunch advocate of protecting humanity from the existential risks posed by AI. He may urge President Donald Trump to transform the Biden-Xi joint statement into a pact. That would require Russia to get on board. Until January this year, Russia had refused to discuss any nuclear-risk-reduction measures, including those involving AI, unless the Ukraine issue was on the table. With Trump engaging Russian President Vladimir Putin in dialogue aimed at improving bilateral relations and ending the Ukraine war, Russia may now be open to such discussions.
The question is who will bell the cat. China may be able to initiate trilateral negotiations. Neutral states including Turkey and Saudi Arabia could pave the way. This is a historic opportunity to make a breakthrough and save humanity from extinction. We must not let it go to waste for lack of political vision, courage and statesmanship.
The views and opinions expressed in this article are those of the author(s) and do not necessarily reflect those of the Asia Research Institute, National University of Singapore.