By Jonathan Davis
According to the script of the iconic 1984 futuristic thriller “The Terminator,” humans are nearly exterminated in the future after the “Skynet” computer network “got smart” and saw mankind, its creator, as its enemy.
Could such a scenario actually occur? More and more, scientists argue that the answer is “yes.”
The theory among today’s scientists isn’t much different from the plot of the original “Terminator,” and it revolves around the concept of artificial intelligence.
Systems that control nuclear weapons, once equipped with AI, or machine learning, could someday “get smart enough” to perceive threats to their own existence.
The Jerusalem Post reports:
While numerous AI experts have told the Jerusalem Post over the years that people worried about AI turning on humanity as in the famous “Terminator” movies simply misunderstand the technology, the likelihood of AI making a catastrophic mistake with nuclear weapons is no fairytale.
A recent article in the Bulletin of the Atomic Scientists, a top group of nuclear scientists, as well as other recent publications by defense experts have said that Russia may already be integrating AI into a new nuclear torpedo it is developing known as the Poseidon, to make it autonomous.
According to the Atomic Scientists report, the US and China are also considering injecting AI deeper into their nuclear weapons programs as they modernize and overhaul their nuclear inventories.

There have been no express reports of Israel integrating AI into what, according to foreign reports, is an arsenal of between 80 and 200 nuclear weapons.
But there have been reports of the IDF integrating AI into conventional weapons, such as the SPICE bomb carried by its F-16s.

Part of the concern in the report was that integrating AI into nuclear weapons systems could become culturally inevitable once conventional weapons become more dominated by AI.
The nuclear holocaust risks that scientists and experts are writing about are not those of a hostile takeover by AI, but of AI being hacked, slipping out of control through a technical error, or badly misjudging a situation.
Such risks could be magnified by unmanned vehicles carrying nuclear weapons, with no one on board responsible for making the final decision to deploy a nuclear weapon.
As you can tell from this report, integrating AI into military systems is not just likely but is already happening. If there is one country that believes it can gain an advantage over a rival by placing its nuclear arsenal on AI standby, that, too, will happen.
Now imagine pairing hypersonic missiles with AI. If the whole thing goes haywire, a nuclear holocaust could occur in record time.
As for technology nearly spoofing human beings into a real nuclear war, it has already happened at least once, the J-Post reported:
An example the article gives of the importance of human judgment was a 1983 incident in which a Soviet officer named Stanislav Petrov disregarded automated audible and visual warnings that US nuclear missiles were inbound.
The systems were wrong, and had Petrov trusted the technology over his own instincts, the world might have gone to nuclear war over a technological malfunction.
Hypersonic missiles are the next generation of ballistic weaponry, and AI could very well be the system that arms and launches them.