Hey, That’s a Great Idea: Just Let AI Control All Nuclear Weapons! By James “Dr Strangelove” Reed

     Here is something from the “great ideas” department: let AI control the world’s nuclear weapons. It is bad enough having humans control them, but who is to say what will happen when humans are taken out of the loop?
  https://thebulletin.org/2019/08/strangelove-redux-us-experts-propose-having-ai-control-nuclear-weapons/

“Hypersonic missiles, stealthy cruise missiles, and weaponized artificial intelligence have so reduced the amount of time that decision makers in the United States would theoretically have to respond to a nuclear attack that, two military experts say, it’s time for a new US nuclear command, control, and communications system. Their solution? Give artificial intelligence control over the launch button. In an article in War on the Rocks titled, ominously, “America Needs a ‘Dead Hand,’” US deterrence experts Adam Lowther and Curtis McGiffin propose a nuclear command, control, and communications setup with some eerie similarities to the Soviet system referenced in the title to their piece. The Dead Hand was a semiautomated system developed to launch the Soviet Union’s nuclear arsenal under certain conditions, including, particularly, the loss of national leaders who could do so on their own. Given the increasing time pressure Lowther and McGiffin say US nuclear decision makers are under, “[I]t may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”

In case handing over the control of nuclear weapons to HAL 9000 sounds risky, the authors also put forward a few other solutions to the nuclear time-pressure problem: Bolster the United States’ ability to respond to a nuclear attack after the fact, that is, ensure a so-called second-strike capability; adopt a willingness to pre-emptively attack other countries based on warnings that they are preparing to attack the United States; or destabilize the country’s adversaries by fielding nukes near their borders, the idea here being that such a move would bring countries to the arms control negotiating table. Still, the authors clearly appear to favor an artificial intelligence-based solution. “Nuclear deterrence creates stability and depends on an adversary’s perception that it cannot destroy the United States with a surprise attack, prevent a guaranteed retaliatory strike, or prevent the United States from effectively commanding and controlling its nuclear forces,” they write. “That perception begins with an assured ability to detect, decide, and direct a second strike. In this area, the balance is shifting away from the United States.” History is replete with instances in which it seems, in retrospect, that nuclear war could have started were it not for some flesh-and-blood human refusing to begin Armageddon. Perhaps the most famous such hero was Stanislav Petrov, a Soviet lieutenant colonel, who was the officer on duty in charge of the Soviet Union’s missile-launch detection system when it registered five inbound missiles on Sept. 26, 1983. Petrov decided the signal was in error and reported it as a false alarm. It was. Whether an artificial intelligence would have reached the same decision is, at the least, uncertain.”

     One of the major problems with handing control to AI systems is that the algorithms for nuclear defence would be trained solely upon simulations, rather than real-world data of the parameters encountered in actual attacks. That creates a big problem of garbage in, garbage out. And in the case of global nuclear war, if garbage goes in, we are all likely to end up as the garbage coming out!
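The garbage-in, garbage-out worry can be made concrete with a toy sketch. This is not anyone's actual system; it is a hypothetical detector whose decision rule is learned entirely from simulated data, then confronted with a real-world condition the simulation never modelled (the kind of spurious sensor return Petrov faced in 1983). All names, thresholds, and readings here are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical detector "trained" only on simulated data. In the simulation,
# any sensor reading above 5.0 always means a real attack, so the learned
# decision rule collapses to a simple threshold.
simulated_attacks = [random.uniform(6.0, 10.0) for _ in range(1000)]
simulated_quiet = [random.uniform(0.0, 4.0) for _ in range(1000)]

THRESHOLD = 5.0  # the rule the simulated data supports

def detector(reading: float) -> str:
    """Decision rule derived purely from simulated parameters."""
    return "LAUNCH DETECTED" if reading > THRESHOLD else "all clear"

# On simulated data the rule looks flawless...
assert all(detector(r) == "LAUNCH DETECTED" for r in simulated_attacks)
assert all(detector(r) == "all clear" for r in simulated_quiet)

# ...but the real world contains inputs the simulation never covered.
# In 1983, sunlight glinting off clouds produced sensor returns resembling
# inbound missiles. To this detector, garbage in becomes a confident
# "LAUNCH DETECTED" out, with no Petrov in the loop to overrule it.
sunlight_glint = 7.3  # spurious reading from outside the training distribution
print(detector(sunlight_glint))
```

The point of the sketch: the detector is perfectly accurate on the data it was built from, yet has no way to distinguish a genuine attack from an unmodelled false alarm, because nothing in its training told it such a thing could exist.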

 
