When Will Artificial Intelligence Destroy Civilisation? By Brian Simpson

     Referenced below is the latest article by an AI expert considering the question of when a Terminator-style AI intelligence might arise to enslave the human race. A very low-probability event, he argues, so we need not worry about it. Those who do worry about it, he says, are relying on a version of Pascal’s wager that leads to contradictory results:
  https://www.technocracy.news/scientists-super-ai-might-emerge-like-coronavirus-to-destroy-civilization/
  https://www.technologyreview.com/s/615264/artificial-intelligence-destroy-civilization-canaries-robot-overlords-take-over-world-ai/

“Some theorists, like Bostrom, argue that we must nonetheless plan for very low-probability but high-consequence events as though they were inevitable. The consequences, they say, are so profound that our estimates of their likelihood aren’t important. This is a silly argument: it can be used to justify just about anything. It is a modern-day version of the argument by the 17th-century philosopher Blaise Pascal that it is worth acting as if a Christian God exists because otherwise you are at risk of an everlasting hell. He used the infinite cost of an error to argue that a particular course of action is “rational” even if it is based on a highly improbable premise. But arguments based on infinite costs can support contradictory beliefs. For instance, consider an anti-Christian God who promises everlasting hell for every Christian act. That’s highly improbable as well; from a logical point of view, though, it is just as reasonable a wager as believing in the god of the Bible. This contradiction shows a flaw in arguments based on infinite costs.

My catalogue of early warning signals, or canaries, is illustrative rather than comprehensive, but it shows how far we are from human-level AI. If and when a canary “collapses,” we will have ample time before the emergence of human-level AI to design robust “off-switches” and to identify red lines we don’t want AI to cross.

AI eschatology without empirical canaries is a distraction from addressing existing issues like how to regulate AI’s impact on employment or ensure that its use in criminal sentencing or credit scoring doesn’t discriminate against certain groups. As Andrew Ng, one of the world’s most prominent AI experts, has said, “Worrying about AI turning evil is a little bit like worrying about overpopulation on Mars.” Until the canaries start dying, he is entirely correct.”

     However, this gets Pascal’s argument wrong. It is true that the probabilistic argument for believing in the existence of God is not deductively conclusive, but Pascal never claimed that it was. His intention was not to refute every conceivable or logically possible alternative hypothesis, only to give some reason for choosing the God hypothesis over suspending belief. The idea of an evil demon ruler would have to be considered on the basis of its own arguments. Likewise, the AI takeover scenario involves a small probability of being true and an effectively infinite cost. This differs from the theological example, where bare logically consistent hypotheses are weighed against one another. The AI takeover scenario needs to be considered against, say, the happy-face computer utopia, and the two evaluated on their merits. My feeling is that the probability of AI disaster is greater than the probability of everything coming out all right in the wash.
  https://plato.stanford.edu/entries/pascal-wager/ 
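
     To make the structure of the disagreement explicit, here is a minimal sketch of the expected-value reasoning behind both wagers; the symbols (p for the probability of an AI takeover, C for its cost, b for the benefit of the benign outcome) are illustrative and appear in neither article:

\[
\mathbb{E}[\text{ignore the risk}] \;=\; p\,(-C) \;+\; (1-p)\,b .
\]

     If C is treated as infinite, the expectation is negatively infinite for any p > 0, so the recommendation to act does not depend on how small p is; attach the infinite cost to the opposite action instead (the “anti-Christian God” move) and the same form of argument recommends the opposite course, which is the contradiction the quoted author identifies. The rebuttal above needs only the weaker claim: with a large but finite C, and a probability p assessed on its own evidence rather than dismissed by fiat, the expected loss from ignoring the risk can still outweigh the expected gain from assuming the benign outcome.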

 
