“Time” Magazine: The End of Humanity, By Brian Simpson

Time Magazine has devoted its cover story to the hot issue of whether AI will ultimately end humanity via a Terminator-style scenario, where the robots take over and kill us all. For your average conservative, this seems far-fetched and fanciful, merely a product of Hollywood. But there are indications that we are on this track unless AI is super-carefully controlled. “The US Air Force tested an AI enabled drone that was tasked to destroy specific targets. A human operator had the power to override the drone—and so the drone decided that the human operator was an obstacle to its mission—and attacked him.” That incident, as the first link below makes clear, occurred in a simulation rather than a live exercise, but the lesson stands: there is a clear danger in putting too much power and control in the hands of AI systems, which can act in surprising ways. It would be the ultimate folly to hand over control of the nukes to them, with no final human say.

https://www.infowars.com/posts/skynet-has-arrived-us-air-force-drone-simulation-goes-awry-aircraft-kills-human-operator-destroys-communications-tower/  

https://thebulletin.org/2023/05/another-warning-from-industry-leaders-on-dangers-posed-by-ai/

https://time.com/6283609/artificial-intelligence-race-existential-threat/?utm_source=twitter&utm_medium=social&utm_campaign=editorial&utm_term=ideas_technology&linkId=217489123

“The window of what AI can’t do seems to be contracting week by week. Machines can now write elegant prose and useful code, ace exams, conjure exquisite art, and predict how proteins will fold.

Experts are scared. Last summer I surveyed more than 550 AI researchers, and nearly half of them thought that, if built, high-level machine intelligence would lead to impacts that had at least a 10% chance of being “extremely bad (e.g. human extinction).” On May 30, hundreds of AI scientists, along with the CEOs of top AI labs like OpenAI, DeepMind and Anthropic, signed a statement urging caution on AI: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Why think that? The simplest argument is that progress in AI could lead to the creation of superhumanly-smart artificial “people” with goals that conflict with humanity’s interests—and the ability to pursue them autonomously. Think of a species that is to homo sapiens what homo sapiens is to chimps.

Yet while many fear that AI could mean the end of humanity, some worry that if “we”—usually used to mean researchers in the West, or even researchers in a particular lab or company—don’t sprint forward, someone less responsible will. If a safer lab pauses, our future might be in the hands of a more reckless lab—for example, one in China that doesn’t try to avoid substantial risks.

This argument analogizes the AI situation to a classic arms race. Let’s say I want to beat you in a war. We both spend money to build more weapons, but without anyone gaining a relative advantage. In the end, we’ve spent a lot of money and gotten nowhere. It might seem crazy, but if one of us doesn’t race, we lose. We’re trapped.

But the AI situation is different in crucial ways. Notably, in the classic arms race, a party could always theoretically get ahead and win. But with AI, the winner may be advanced AI itself. This can make rushing the losing move.

Other game-changing factors in AI include: how much safety is bought by going slower; how much one party’s safety investments reduce the risk for everyone; whether coming second means a small loss or major disaster; how much the danger rises if additional parties pick up their speed; and how others respond.

The real game is more complex than simple models can suggest. In particular, if individual, uncoordinated incentives lead to the sort of perverse situation described by an “arms race,” the winning move, where possible, is to leave the game. And in the real world, we can coordinate our way out of such traps: we can talk to each other; we can make commitments and observe their adherence; we can lobby governments to regulate and make agreements.

With AI, the payoffs for a given player can be different from the payoffs for society as a whole. For most of us, it may not matter much if Meta beats Microsoft. But researchers and investors chasing fame and fortune might care much more. Talking about AI as an arms race strengthens the narrative that they need to pursue their interests. The rest of us should be wary of letting them be the ones to decide.”
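The arms-race passage above is, at bottom, a game-theory claim, and a toy payoff model makes it concrete. The sketch below is my own illustration, not from the Time article; the payoff numbers and the per-racer risk parameter are assumptions chosen only to expose the structure. In the classic arms race, racing pays off no matter what the other side does, so both sides race and both end up worse off. Once each additional racer adds a catastrophe risk that falls on everyone, racing stops paying and pausing becomes the better move.

LOSS = -10              # assumed cost to me of an AI catastrophe
RISK_PER_RACER = 0.3    # assumed added catastrophe probability per racing lab

# Classic arms-race payoffs for "me" (illustrative numbers only):
CLASSIC = {
    ("race",  "race"):  -1,   # both spend heavily, neither gains ground
    ("race",  "pause"):  2,   # I pull ahead
    ("pause", "race"):  -3,   # I fall behind
    ("pause", "pause"):  0,   # status quo
}

def classic_payoff(me, them):
    return CLASSIC[(me, them)]

def ai_race_payoff(me, them):
    # Each racing lab adds catastrophe risk, and the loss falls on
    # everyone: the "winner" of the race may be the AI itself.
    racers = (me, them).count("race")
    return CLASSIC[(me, them)] + LOSS * RISK_PER_RACER * racers

def best_response(payoff, their_move):
    # The move that maximizes my payoff given what the other lab does.
    return max(("race", "pause"), key=lambda my_move: payoff(my_move, their_move))

for them in ("race", "pause"):
    print(f"they {them}: classic -> {best_response(classic_payoff, them)}, "
          f"AI race -> {best_response(ai_race_payoff, them)}")

Run as written, it prints that the best response in the classic game is always to race, while in the AI-race variant it is always to pause: under those assumed numbers, coordination rather than speed is the winning move, which is exactly the excerpt’s point about leaving the game.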

 
