How giving AI robots control over nukes could start WWIII and ‘kill us all’

Giving artificial intelligence control of nuclear weapons could spark an apocalyptic conflict, a leading expert has warned.

As AI takes on a greater role in controlling devastating weapons, the chances of the technology making a mistake and triggering World War III increase.


Weapons already moving in this direction include the US B-21 nuclear bomber, China’s AI-guided hypersonic missiles and Russia’s Poseidon nuclear drone.

Writing for the Bulletin of the Atomic Scientists, expert Zachary Kallenborn, a policy researcher at the Schar School of Policy and Government, warned: “If artificial intelligences controlled nuclear weapons, we could all be dead.”

He continued: “The military is increasingly integrating autonomous functions into weapons systems,” adding that “there is no guarantee that some military will not hand over the responsibility of nuclear launches to AI.”

Kallenborn, who describes himself as a US military “mad scientist”, explained that “error” is the biggest problem with autonomous nuclear weapons.

He said: “In the real world, data can be biased or incomplete in all sorts of ways.”

Kallenborn added: “In a nuclear weapons context, a government may have little data on adversary military platforms; existing data may be structurally biased, for example by relying on satellite imagery; or the data may not reflect expected, obvious variations such as images taken on foggy, rainy or overcast days.”

Training a nuclear weapons AI program also poses a major challenge, as nuclear weapons have fortunately only been used twice in history, at Hiroshima and Nagasaki, meaning any system would have very little real-world data to learn from.

Despite these concerns, a number of military AI systems, including some tied to nuclear weapons, are already in place around the world.


Russia is reported to have modernized its nuclear arsenal. Credit: AFP

In recent years, Russia has also upgraded its so-called “Doomsday device”, known as the “Dead Hand”.

This last line of defense in a nuclear war would fire all Russian nuclear weapons at once, guaranteeing total destruction of the enemy.

First developed during the Cold War, it is believed to have undergone an AI upgrade in recent years.

In 2018, nuclear disarmament expert Dr Bruce Blair told Daily Star Online he believed the system, known as ‘Perimeter’, was ‘vulnerable to cyberattacks’ which could prove catastrophic.

“Dead Hand” systems are intended to provide a backup in the event that a state’s nuclear command authority is killed or otherwise disrupted.

US military experts Adam Lowther and Curtis McGuffin argued in a 2019 article that the United States should consider “an automated strategic response system based on artificial intelligence”.


In May 2018, Vladimir Putin unveiled Russia’s underwater nuclear drone, which experts say could trigger 300ft tsunamis.

The Poseidon nuclear drone, due for completion by 2027, is designed to obliterate enemy naval bases with two megatons of nuclear energy.

Described by US Navy documents as an “autonomous intercontinental nuclear-powered torpedo”, and by the Congressional Research Service as an “autonomous underwater vehicle”, it is intended for use as a second-strike weapon in the event of nuclear conflict.

The big unanswered question about Poseidon is what it can do independently.

Kallenborn warns that it could potentially be allowed to attack autonomously under specific conditions.

He said: “For example, what if, in a crisis scenario where the Russian leadership fears a possible nuclear attack, Poseidon torpedoes are launched in loiter mode? It could be that if a Poseidon loses communications with its host submarine, it launches an attack.”

Announcing the launch at the time, Putin boasted that the weapon would have “virtually no vulnerability” and “nothing in the world will be able to withstand it”.

Experts warn its greatest threat would be triggering deadly tsunamis, which physicist Rex Richardson says could match the scale of the tsunami that struck Fukushima in 2011.


The United States has unveiled a $550 million remotely piloted bomber that can carry nuclear weapons and hide from enemy missiles.

In 2020, the US Air Force released new renderings of its B-21 stealth bomber, the first new US bomber in over 30 years.

Not only can it be piloted remotely, it can also fly itself using artificial intelligence to select targets and avoid detection without human intervention.

Although the military insists that a human operator will always make the final decision on whether or not to hit a target, information about the plane has been slow to come out.


Last year, China boasted that its AI fighter pilots were “better than humans” and shot down their non-AI counterparts in simulated dogfights.

The Chinese military’s official PLA Daily newspaper quoted a pilot who claimed the technology learned the movements of his enemies and could defeat them a day later.

Chinese brigade commander Du Jianfeng said the AI pilots also helped make human participants better pilots by strengthening their flying skills.

Last year, China claimed its AI-controlled hypersonic missiles could hit targets 10 times more accurately than a human-controlled missile.

Chinese military missile scientists, writing in the journal Systems Engineering and Electronics, have proposed using artificial intelligence to write the weapon’s software “on the fly”, meaning that human controllers would have no idea what would happen after pressing the launch button.


In 2021, Russia unveiled a new AI stealth fighter jet – while taking a dig at the Royal Navy.

The 1,500mph plane, called Checkmate, was unveiled at a Russian airshow by an elated Vladimir Putin.

An advert for the self-driving aircraft – which can hide from enemies – featured a photo of the Royal Navy’s HMS Defender in the jet’s sights with the caption: ‘See you soon’.

If Artificial Intelligences Controlled Nuclear Weapons, We Could All Be Dead

Zachary Kallenborn, Nuclear Weapons Specialist

The world has already come close to a devastating nuclear war that was prevented only by human intervention.

On September 27, 1983, Soviet officer Stanislav Petrov was on duty at a secret command center south of Moscow when a chilling alarm went off.

The system reported that the United States had launched intercontinental ballistic missiles carrying nuclear warheads.

Faced with an impossible choice – raise the alarm and potentially start World War III or bank on a false alarm – Petrov chose the latter.

He later said, “I categorically refused to be guilty of starting World War III.”

Kallenborn said Petrov made a human choice not to trust the automated launch detection system, explaining: “The computer was wrong; Petrov was right. The false signals came from the early warning system mistaking the sun’s reflection off clouds for missiles.

“But if Petrov had been a machine, programmed to react automatically when confidence was high enough, that mistake would have triggered a nuclear war.”

He added: “There is no guarantee that some military will not give AI responsibility for nuclear launches; international law does not specify that there should always be a ‘Petrov’ guarding the button. That is something that should change, soon.”

Soviet Colonel Stanislav Petrov prevented a possible nuclear war. Credit: Alamy

An expert has warned that AI could trigger a nuclear apocalypse. Credit: Getty – Contributor
