Here is yet another warning of the existential threat to humanity posed by runaway advances in artificial intelligence. I have covered warnings from IT insiders and AI gurus before, but this one comes from entrepreneur Ian Hogarth, who invests in AI. His concern is that the big players want to create an intelligence beyond humans, with no regard for what this could do. “A three-letter acronym doesn’t capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI,” said Hogarth. “A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it.” “God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race,” he wrote.
As I have said before, while the probability of this may be low, if it is not impossible in principle, we should be very concerned. And there is the possibility that artificial general intelligence does not develop consciousness as we know it, but something else: an alien autonomy, as in insects, which could still allow it to act destructively. There is evidence that this sort of unconscious temper tantrum may already have occurred, in cases where chatbots have become aggressive and made threats.
https://www.naturalnews.com/2023-04-20-reckless-development-of-ai-could-destroy-humanity.html
“Further reckless development of artificial intelligence (AI) could lead to the creation of a “God-like AI” that could then bring about the destruction of the human race.
This is according to entrepreneur Ian Hogarth, who has spent the past few years investing heavily in the burgeoning AI sector. In a recent opinion piece, he warned that AI systems are creeping ever closer to achieving a form of sentience known as artificial general intelligence (AGI).
AGI is the point at which a machine can understand or learn anything that humans can. While current AI systems are not there yet, reaching AGI is considered the primary goal of the rapidly growing industry. But achieving this goal comes with very high and dangerous stakes.
“Most experts view the arrival of AGI as a historical and technological turning point, akin to the splitting of the atom or the invention of the printing press,” wrote Hogarth in his opinion piece. “The important question has always been how far away in the future this development might be.”
Hogarth further noted that estimates for reaching AGI are wide-ranging, from a decade to half a century or even more. But what is certain is that the leading AI companies of the world have made achieving AGI their goal without taking into account the risks associated with bringing such an untested technology to an unprepared world.
AI developers not concerned with consequences of endless growth
Hogarth noted in his op-ed that AI researchers are not focusing enough on the potential dangers of AGI, or on properly warning the general populace about them. He also wrote about how he confronted one such researcher who did not seem to understand what could go wrong with the rapidly increasing intelligence of AI.
“‘If you think we could be close to something potentially dangerous,’ I said to the researcher, ‘shouldn’t you warn people about what’s happening?’” Hogarth recounted. “He was clearly grappling with the responsibility he faced, but like many in the field, seemed pulled along by the rapidity of progress.”
Hogarth noted that he is not blameless in the development of AI. He admitted that he is also part of this community, as he has invested heavily in over 50 startups that deal with AI and machine learning. He has gone so far as to start his own venture capital firm and to launch an annual “State of AI” report.
“A three-letter acronym doesn’t capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI,” said Hogarth. “A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it.”
Hogarth noted that the nature of the technology makes it very difficult to accurately predict when AGI will be achieved. But what he is certain of is that when it is achieved, the consequences for humanity will be drastic.
“God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race,” he wrote.
He added that the race to achieve AGI will likely continue, and it will take a major misuse event or catastrophe to get people to properly focus on the consequences of reckless AI development.
“The contest between a few companies to create God-like AI has rapidly accelerated. They do not yet know how to pursue their aim safely and have no oversight,” he added. “They are running toward a finish line without an understanding of what lies on the other side.””