Goldman Sachs Equity on the AI Threat

By Brian Simpson

I have been covering the possibility of an emerging threat to humanity from generative artificial intelligence, generally citing insiders who fear the direction things are going. So far these have been individuals, but now Goldman Sachs' equity research division has issued a study. It predicts that "AI will lead to one-third of a billion layoffs (at least) in the US and Europe. Think of it as the robotization of the service sector."

Specifically, according to Jan Hatzius, "using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work. Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300 million full-time jobs to automation" as up to "two thirds of occupations could be partially automated by AI." That means the end of work for these people, as there will be fierce competition for any jobs not displaced by AI, as long as such jobs last.


Then there is the existential threat that many have discussed; their conclusion: "The safety risks around AI are huge, and we think there is a more than 50/50 chance AI will wipe out all of humanity by the middle of the century." While little of this is discussed in print, a number of YouTube channels are dealing with the idea of an AI super-virus. This could be a computer virus that infects every computer other than the AI itself, which has supposedly reached the singularity of super-intelligence and consciousness. Or it could involve the AI devising a biological virus that could kill most, or all, of humanity.

This is definitely worth thinking about, as the stakes are as high as they could possibly be.

https://www.youtube.com/watch?v=egv7P8bRf9


https://archive.md/WhH56


In late March, no lesser mortals than Goldman Sachs' equity research team took a deep dive into the implications of AI for the world (and beyond).

The magnum opus led readers through the evolution of AI... to the promise of AI, which now "increasingly outperforms human benchmarks"...

Their shocking conclusion: AI will lead to one-third of a billion layoffs (at least) in the US and Europe. Think of it as the robotization of the service sector.

Specifically, according to Jan Hatzius, "using data on occupational tasks in both the US and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work. Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300 million full-time jobs to automation" as up to "two thirds of occupations could be partially automated by AI."

Goldman does everything it can to spin the data in a positive light: the mass layoffs would be offset by "a boom in labor productivity that significantly increases global output," with widespread AI adoption eventually driving a 7%, or almost $7 trillion, increase in annual global GDP over a 10-year period.

However, as great as $7 trillion sounds... we would have to survive it to enjoy it (and by 'we', we mean 'all of humanity'), and that's where BCA Research's latest report takes up the story: ChatGPT And The Curse Of The Second Law.

Here's the punchline:

The safety risks around AI are huge, and we think there is a more than 50/50 chance AI will wipe out all of humanity by the middle of the century.

So how does BCA Research's team get there?

Most discussions of AI extrapolate linearly from what AI can do today to what it can do tomorrow. But AI’s progression is following an exponential curve, not a linear one, meaning that advances could come much faster than expected.

AI Is Different

If AI follows the same trajectory as other major technological revolutions, we may not see major economy-wide productivity gains from AI until the 2030s or later.

That being said, there are reasons to think that AI's impact could come much sooner. A recent study by Erik Brynjolfsson and his co-authors revealed that productivity rose by 14% among customer service workers at a major software firm after they were given access to generative artificial intelligence tools.

The thing about those earlier technological revolutions is that they were focused on the application and dissemination of pre-existing knowledge. In contrast, the AI revolution has the potential to lead to the creation of new knowledge – knowledge generated by machines rather than humans.

To be sure, this has not happened yet. ChatGPT still functions as a glorified autocomplete feature, using statistics learned from a massive library of text to add word after word, sentence after sentence, to a running dialogue. Yet, even with this limited functionality, it has managed to show what a recent Microsoft research paper described as “sparks” of artificial general intelligence.

While the investment frenzy over AI has already begun, the economic impact of AI is not yet visible in the productivity data...


The Last Thing Humans Ever Invent

The fact that AI minds are nothing like human minds is irrelevant. A plane is nothing like a bird. Yet, the former can still fly much faster than the latter.

If models such as ChatGPT ever reach the point where they can train themselves – much like DeepMind’s AlphaZero trained itself to master chess through self-play, without ever studying a human game – then they will be able to recursively improve themselves at an astronomically fast rate. To some extent, this is already happening.

Imagine an intelligence that can evolve from an initial IQ of 1. It would take five doublings to reach an IQ of 32, which is still far too low to function in a modern technological world. But it would only take three more doublings to reach an IQ of 256, which is far above the IQ of any human who has ever lived. AI has been improving exponentially for many years, but it is only now that we have reached the point along the curve where it can surpass humans on a wide variety of cognitive tasks.
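
To make that doubling arithmetic concrete, here is a minimal sketch in Python. The starting value of 1 and the eight doublings are simply the toy numbers from the example above, not anything taken from the BCA report:

# Toy illustration of the doubling arithmetic described above.
# Starting "IQ" of 1, doubled eight times; the values are illustrative only.
iq = 1
for n in range(1, 9):
    iq *= 2
    print(f"doubling {n}: IQ = {iq}")
# Doublings 1-5 give 2, 4, 8, 16, 32; doublings 6-8 give 64, 128, 256.

The point of the toy numbers is simply that the last few steps of an exponential process cover far more ground than all the earlier steps combined.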

This highlights a key shortcoming of most discussions of AI’s probable impact on society and the economy. They extrapolate linearly from what AI can do today to what it can do tomorrow. But AI’s progression is following an exponential curve, not a linear one, meaning that advances could come much faster than expected. In fact, AI’s progression will probably be hyperexponential, with the time between performance doublings shrinking from years to perhaps weeks or even hours. Just as the investment community and the broader public were blindsided by the exponential increase in cases during the early days of the pandemic, they will be blindsided by how quickly AI transforms the world around us.

A Phase Transition

It does not matter if you can run the 100-meter dash in 11 or 12 seconds. However, it does matter if you can run it in 9 seconds or 10 seconds, because the difference between the two times will determine if you get an Olympic gold medal or a participation ribbon. By the same token, water is just water if its temperature is 80 or 90 degrees Celsius. But when the temperature hits 100 degrees, a “phase transition” occurs: it becomes steam. Humanity may be on the brink of such a phase transition.

The human population barely grew until the advent of farming around 10,000 BC. Following the Agricultural Revolution, global population growth accelerated to about 2.5% per century. With the start of the Industrial Revolution, global population growth jumped 40-fold to 1% per year. As humanity finally exited the Malthusian trap, per capita income began to rise much more quickly than the population. Since 1800, global GDP has risen by 2.8% per year for a cumulative increase of 50,000%.
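
As a rough sanity check on that last figure, here is a short Python sketch. It assumes 2.8% compound annual growth over roughly 223 years (1800 to the early 2020s); the exact end year is my assumption, not something stated in the report:

# Rough check of the cumulative-growth figure quoted above.
# Assumption: 2.8% compound annual growth over ~223 years (1800 to ~2023).
rate = 0.028
years = 223
multiple = (1 + rate) ** years
print(f"GDP multiple since 1800: ~{multiple:.0f}x")
print(f"cumulative increase: ~{(multiple - 1) * 100:,.0f}%")
# Prints roughly 470x, a cumulative increase on the order of 47,000%,
# consistent with the report's rounded figure of 50,000%.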

If humanity survives the transition to superintelligent AI, the impact on growth could be comparable to what occurred first during the Agricultural Revolution, and later during the Industrial Revolution. Both revolutions experienced a 30-to-100-fold increase in GDP growth relative to the previous epoch.

New technologies and new industries will proliferate. Problems that once seemed intractable, such as how to stop and reverse aging, could be solved overnight. Before ChatGPT, it seemed unlikely that such a phase transition would begin anytime soon. Now, it is probable it will happen by the end of the decade.

BCA's Bottom Line: Unlike past technological revolutions, the impact of superintelligent AI could arrive quite quickly. It will usher in an era of unprecedented prosperity or turn us all into paper clips.

And that is where BCA's research takes a darker turn...

Open the Pod Bay Doors, Hal

Will we survive the transition to superintelligence? Unfortunately, the odds are not good. The main issue is centered on the so-called alignment problem – how to align our goals with the AI’s goals.

Every AI system needs to be given a goal to pursue, without which it would not know how to use its resources. In the case of ChatGPT, that goal is entered as a prompt by the user. With more elaborate AIs such as AutoGPT, the goals could be more open-ended.

The list of all conceivable goals that an AI can pursue is enormous, only a tiny subset of which most humans would ever want to see fulfilled. And even within that tiny subset, getting an AI to fulfill a goal in the way it was originally intended could prove to be exceedingly difficult.

The Curse of the Second Law

The laws of physics do not have a preferred direction of time. The reason we perceive the flow of time is because of the Second Law of Thermodynamics, which states that entropy almost always increases in the direction we call the future.

A system with high entropy has more possible arrangements than a system with low entropy. If you see a photo of a broken egg and a photo of the same unbroken egg, you can tell which photo was taken first because there are many more ways an egg can be broken than unbroken.

One of the consequences of the Second Law is that it is much easier to destroy than to create. Such destruction can come inadvertently – as in the example of the paper clip maximizer, an AI that pursues its assigned goal of making paper clips so relentlessly that it converts everything, including us, into paper clips – or it might come intentionally. Either way, it might be difficult to avoid annihilation.

AI Safety Will Be a Huge Industry

By one estimate, vertebrate animal populations have fallen by 60% since 1970 as a result of human activity. We never purposely set out to kill them. It was just a by-product of economic expansion across the planet. The risk is that contact with a more intelligent AI could also usher in our extinction.

On March 22, the Future of Life Institute published a letter signed by more than 1,000 luminaries, including Elon Musk and Apple co-founder Steve Wozniak, arguing for a six-month pause in AI research to allow for more time to develop better safety protocols.

So far, the AI industry has been extremely cavalier about safety issues. That will likely change, as concern over the risks posed by AI continues to accumulate.

So, in summary, AI will lead to 300 million job losses according to Goldman... and then it will wipe out all of humanity.

Maybe Musk is on to something with the 'pause'.
