I have been writing for some years at this blog about the threats posed by runaway AI to human jobs, and to the human essence by transhumanism. In particular, I have been concerned about the lack of worry among conservatives about these issues; in the main, they say there is no threat. A nice article by philosopher Noah Carl addresses all of these issues, in more detail than I have done, and he reaches the same conclusion. By way of summary, here are some alarming inroads AI has made into the human domain. I used AI itself to summarise what it has got up to, so this comes from the horse's mouth:
Economic Games: Research by Qiaozhu Mei and colleagues found that AI's behavior in games like the Dictator Game and the Prisoner's Dilemma was indistinguishable from that of average humans.
Human Interaction: A study by Cameron Jones and Benjamin Bergen revealed that participants judged AI to be human 54% of the time during five-minute conversations, compared with 67% for actual humans.
Academic Performance: Peter Scarfe and his team discovered that AI-generated answers on a psychology exam were undetected 94% of the time and received higher grades than human-written responses.
Medical Advice: John Ayers and colleagues found that evaluators preferred AI-generated medical responses over those from human physicians in 79% of cases, rating them as more informative and empathetic.
Creativity: Erik Guzik's research indicated that AI scored within the top 7% for flexibility and the top 1% for originality and fluency on the Torrance Tests of Creative Thinking.
Product Ideation: Karan Girotra's study showed that out of 40 top product ideas, 35 were generated by AI, outperforming human participants.
Poetry Composition: Brian Porter and Edouard Machery found that non-expert judges rated AI-written poems as more rhythmic and beautiful than those by well-known human poets.
This should surely give us pause. The famous Turing test, proposed by British mathematician and computer scientist Alan Turing in 1950, measures a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human. The test involves a human evaluator interacting with a machine and another human through text-based communication. If the evaluator cannot reliably distinguish the machine from the human, the machine is said to have "passed" the Turing test. It seems that present AI has passed, or soon will pass, this test. And that is concerning, to say the least.
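To make the pass criterion concrete, here is a minimal sketch in Python of how such a test might be scored. The 54% and 67% rates are taken from the Jones and Bergen study cited above; the individual judge verdicts are simulated placeholders, not real data.

```python
import random

random.seed(0)  # reproducible toy run

def judged_human_rate(verdicts):
    """Fraction of trials in which the judge called the witness human."""
    return sum(verdicts) / len(verdicts)

# Simulated verdicts (True = judge believed the witness was human),
# drawn at the rates Jones and Bergen report for AI and for humans.
machine_verdicts = [random.random() < 0.54 for _ in range(1000)]
human_verdicts = [random.random() < 0.67 for _ in range(1000)]

machine_rate = judged_human_rate(machine_verdicts)

# Pass criterion: judges cannot reliably pick out the machine, i.e. it
# is judged human at or above chance (50%).
print(f"Machine judged human: {machine_rate:.0%}")
print(f"Human judged human:   {judged_human_rate(human_verdicts):.0%}")
print("Passed" if machine_rate >= 0.5 else "Failed")
```

On this scoring, an AI judged human 54% of the time clears the chance threshold even while real humans score higher.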
https://www.aporiamagazine.com/p/yes-youre-going-to-be-replaced
"When the word "cope" started to catch on a few years ago, I initially opposed its use. As far as I could see, it had come to mean nothing more than "to try and justify a position that someone else disagrees with". Person A would say something like, "Here's why I believe such-and-such." And person B would chime in with, "Oh yeah, that's why you believe it? That's cope." However, I have changed my mind. "Cope" is a useful word, even if it does embody the kind of irreverence that is only too common in internet discourse. It basically means, "to try and justify a position that you really should have abandoned".
Which brings me to my point. There is an immense amount of cope about AI, especially from conservatives. This cope comes in two forms. First, there is the claim that AI isn't really very impressive and can't really do very much. Second, there is the claim that while AI is quite impressive and can do quite a lot, its effects on society will be largely or wholly positive.
The first form of cope is easy to expose, as a brief trawl of the academic literature and a few germane examples will illuminate.
Qiaozhu Mei and colleagues prompted AI to play economic games such as the Dictator Game, the Ultimatum Game and the Prisoner's Dilemma, and then compared its behaviour to that of humans from a large international sample. They found that the best-performing AI behaved in a way that was indistinguishable from the average human.
Cameron Jones and Benjamin Bergen invited human participants to have a five-minute conversation with either a human or an AI, and then asked them whether they thought their interlocutor was human. The best-performing AI was judged to be human 54% of the time, whereas humans were judged to be human only 67% of the time. (The worst-performing AI, an obsolete system, was judged to be human 22% of the time.)
Peter Scarfe and colleagues submitted AI-written answers to an online exam for a psychology course at a major British university. They found that 94% of the AI-written answers went undetected, and that the AI-written answers were awarded grades half a grade-boundary higher than those written by human students.
John Ayers and colleagues identified 195 exchanges where a verified physician responded to a public question online. They then posed the same questions to AI. Responses were evaluated by a team of physicians who were blind to their source (human versus AI). Evaluators preferred the AI responses in 79% of cases, rating them as more informative and more empathetic.
Erik Guzik and colleagues administered the Torrance Tests of Creative Thinking to AI and compared its performance with that of humans from several US samples. They found that AI scored within the top 7% for flexibility and the top 1% for both originality and fluency, as judged by humans who were not aware of the study's purpose.
Karan Girotra and colleagues asked Wharton MBA students to "generate an idea for a new product or service appealing to college students that could be made available for $50 or less", and randomly selected two hundred ideas. They then compared these with two hundred ideas generated by AI. Of the 40 best ideas, as judged by humans who were not aware of the study's purpose, 35 were generated by AI and only 5 were generated by humans.
Brian Porter and Edouard Machery asked humans who were not experts in poetry to judge poems written by AI or well-known human poets. The humans were unable to reliably distinguish AI-written from human poetry, and rated the AI-written poems as more rhythmic and more beautiful.
Lauren Martin and colleagues had AI review legal contracts and compared its performance to that of human lawyers from the US and New Zealand. They found that the best-performing AI matched human performance in terms of accuracy, and far exceeded human performance in terms of both speed and cost efficiency.
An AI was recently able to solve 25% of frontier math problems, which typically take human specialists hours or even days to solve. Only a month earlier, legendary mathematician Terence Tao had opined that the problems would "resist AIs for several years at least" because relevant training data is "almost non-existent". The same AI achieved a competitive coding score at the 99th percentile of human coders.
Moving from the academic literature to the world of creative arts, Sean Thomas penned a sobering article for the Spectator titled 'The person who edited this will soon be redundant'. Thomas, who is the author of several bestselling books, asked Gemini for feedback on his latest, unpublished novel (which was therefore not in the AI's training data). After only 20 seconds, it served up a critique that was so good it left him "slack jawed and dumbfounded". He writes:
What makes this critique particularly striking to me is its similarity to the critique my professional editor has already given me, down to minor details. And where it differs, Gemini is arguably a tad better – sharper and clearer. And my real human editor is absolutely brilliant, one of the best in the business, renowned in the industry.
More recently, the veteran screenwriter Paul Schrader, known for movies such as Taxi Driver, Raging Bull and First Reformed, posted some sobering comments on his Facebook page:
I've just come to realize AI is smarter than I am. Has better ideas, has more efficient ways to execute them. This is an existential moment, akin to what Kasparov felt in 1997 when he realized Deep Blue was going to beat him at chess …
I just sent chatgpt a script I'd written some years ago and asked for improvements. In five seconds it responded with notes as good or better than I've ever received from a film executive.
Whether AI is just a stochastic parrot, a glorified autocomplete, or something more like a human mind – I can't say. What I can say is that its capabilities are extremely impressive and appear to be getting more impressive with each successive release. Copers are starting to sound like Reg in the "What have the Romans ever done for us?" scene from Monty Python's Life of Brian. "All right, but apart from answering exam questions, reviewing legal contracts, solving math problems, writing poetry, responding with empathy and coming up with new ideas, what has AI ever done better than humans?"
Recall that ChatGPT was released in November of 2022. So we're only a few years into the new era of generative AI. Extrapolating forward, it seems eminently plausible that AI will soon be at least as smart and creative as the smartest and most creative humans. In fact, several of the academic studies cited above are probably already out of date.
[Figure omitted. Caption: Accurate as of November 2023.]
Although it should be obvious, I will spell out why this is so significant: AI is fast and cheap. Since it operates at lightning speed and doesn't demand tens of thousands of dollars in compensation, most intellectual tasks that are currently done by humans will be done more quickly and cheaply by AI. And this will be true even if AI never actually surpasses the smartest and most creative humans in their respective domains of expertise (even if its capabilities only increase asymptotically).
Instead of hiring a lawyer, you will tell AI to "review these documents and write up a contract". Instead of employing a financial analyst, you will instruct AI to "analyse these data and offer some recommendations". Instead of paying for streaming services, you will prompt AI to "consult my favourite films and create a similar one". Here's how the CEO of OpenAI, Sam Altman, put it.
Prediction: AI will cause the price of work that can happen in front of a computer to decrease much faster than the price of work that happens in the physical world.
This is the opposite of what most people (including me) expected, and will have strange effects.
Which brings me to the second form of cope that I mentioned at the start: the claim that AI's effects on society will be largely or wholly positive.
Now, basic economic theory suggests that any technology that reduces the cost of "work that can happen in front of a computer" by 99% will inevitably boost GDP in the long run. Would the pie be larger if the cost of such work was higher? Surely not. So the cost being substantially lower must increase the size of the pie. However, a boost to long-run GDP isn't the end of the story.
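To make the arithmetic behind the larger pie concrete, here is a toy calculation (all numbers hypothetical, chosen only to illustrate a 99% cost reduction): a fixed budget buys a hundred times as much of the cheapened work.

```python
# Toy illustration of the cost argument, not a forecast.
human_cost_per_task = 100.0  # dollars of human labour per task (hypothetical)
ai_cost_per_task = 1.0       # the same task after a 99% cost reduction
budget = 10_000.0            # fixed spending on this kind of work

tasks_before = budget / human_cost_per_task  # 100 tasks
tasks_after = budget / ai_cost_per_task      # 10,000 tasks

print(f"Same budget buys {tasks_after / tasks_before:.0f}x more output")
```

The multiplier is what drives the GDP claim; the distributional questions below are what it leaves open.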
A lot of people will be made unemployed, at least temporarily. We're talking about scores of lawyers, financial analysts, computer programmers, accountants, editors, graphic designers and so on.1
The AI researcher and superforecaster Peter Wildeford recently posted, "I'm 50% sure we're going to all be unemployed due to technology within 10 years." (Wildeford finished in the top 20 in the ACX Prediction Contest three years in a row, so his predictions are not to be sniffed at.) When pressed, he conceded he was speaking too loosely when he said "all" but insisted there will likely be "massive labor impacts".
His assessment is not unusual. Many industry leaders have warned of job losses, with Altman suggesting that "70–80 percent of existing human jobs" could be eliminated. In September of 2023, a panel of economic experts from the US and Europe were asked whether "use of artificial intelligence over the next ten years will have a negative impact on the earnings potential of substantial numbers of high-skilled workers in advanced countries". Almost none disagreed.
Given how much wealth will be generated from the precipitous fall in the cost of intellectual work, one presumes that most of the unemployed lawyers, financial analysts and computer programmers will find something else to do. We will certainly need many more nurses and social workers over the next few decades to deal with the legions of baby boomers reaching their twilight years. "Sorry the programming career didn't work out. Have you considered working in a care home?"
There's a sub-category of cope here according to which AI will merely "augment" intellectual work, rather than replacing vast swathes of it. I don't find this persuasive at all. Yes, there will probably be humans doing such work in 2035. The point is there will be markedly fewer. Even the power loom didn't completely "replace" textile workers, since it still required workers to operate. It just massively reduced demand for them. You could say of any non-fully autonomous technology that it merely "augments" work.
There are numerous occupations whose professional ranks have been decimated by technology. Before the invention of hi-fi recording, you could make a decent living as a classical singer, performing on stage, in restaurants or at private events. Today, such opportunities are vanishingly rare. Once it became possible to store a flawless recording of Pavarotti on a CD, the top singers captured almost the entire market. AI is like hi-fi recording for every intellectual occupation.
Assuming that all the unemployed workers eventually return to gainful employment, whether in care homes or elsewhere, we will still face a painful transition period. A lot of currently high-status people will experience a sudden and dramatic loss of status. And if Peter Turchin's theory of elite overproduction has any merit, this could provoke social upheaval.2
It's true that we've lived through huge changes in the labour market before, notably during the industrial revolution. However, constructing all the mills, factories, railways and bridges took literally decades, giving people time to adjust. As ever more advanced AI systems come online, their rollout will be practically instantaneous. The "strange effects" to which Altman referred may include things like your profession being viable at the start of the week but not at the end of the week.
AI is going to upend the labour market, thereby shattering status hierarchies and perhaps fomenting social unrest. Yet conservatives, for the most part, don't seem bothered. I thought they were sceptical of radical change? Right-wingers in the tech industry are understandably concerned about being replaced by foreigners on H-1B visas, but they seem positively blithe about the much more serious threat from AI. I am reminded of this observation from the Unabomber Ted Kaczynski.3
The conservatives are fools: They whine about the decay of traditional values, yet they enthusiastically support technological progress and economic growth. Apparently it never occurs to them that you can't make rapid, drastic changes in the technology and the economy of a society without causing rapid changes in all other aspects of the society as well.
Another negative consequence of AI is the loss of meaning it may engender. Bo Winegard and I already discussed this extensively in our essay responding to Steven Pinker. The gist of our argument is that humans don't just value the products of our intellect; we also value the process of applying our intellect. So far from enhancing our well-being, a world in which future civilisational advancements are largely automated could give rise to profound ennui.4
I haven't even discussed the negative consequences AI could have for human relationships.
Why, then, are conservatives coping so hard? Can they not face facts? When will they wake up and smell the threat that AI poses to much of what they claim to care about? There are several reasons, I think.
First, some conservatives genuinely subscribe to the first form of cope that I listed at the outset. They believe AI isn't really very impressive and can't really do very much – so why worry? Although I personally don't understand how someone couldn't be impressed by ChatGPT (it is already smarter than most humans), I trust the evidence adduced above may go some way to disabusing the cynics.
Second, aside from mavericks like Eliezer Yudkowsky, most of the people complaining about AI are disgruntled leftists who rail against "tech bros" (people richer than them) and think data centres use too much electricity. As a result, complaining about AI has become left-coded. And just as liberal elites don't want to be associated with the unwashed masses of Middle America, aspirational conservatives don't want to be associated with the resentful (and unwashed) leftists who loathe AI.
Third, complaining about AI is not only left-coded but increasingly low-status. For a start, everyone associated with the technology seems to be wealthy and successful, which isn't particularly surprising given the cognitive demands of the field. More generally, holding opinions that might get you labelled a "Luddite" is intensely unfashionable in the US – a nouveau riche country that's always prided itself on pushing forward the technological frontier.
Fourth, complaining about something whose continued development is all but guaranteed seems rather futile. Given the real potential benefits of AI (curing diseases, supercharging growth, unlocking the mysteries of the universe), the possibility that there will be some kind of international agreement to halt its development, an AI non-proliferation treaty, is incredibly remote. So what would be the point of complaining?
This is an error, of course. Just because something is inevitable doesn't mean you shouldn't lament the risks and downsides. Lamenting the inevitable has long been an integral part of the conservative tradition. As the great British statesman Lord Salisbury remarked, "I have for so many years entertained a firm conviction that we were going to the dogs that I have got to be quite accustomed to the expectation."
Coping is undignified. Worse than that, it's dishonest. Conservatives take pride in dealing with things as they really are, rather than as we might wish them to be. They are meant to revere truth. And the truth is that AI threatens much of what they claim to care about. It's certainly a far greater threat than the usual bugbears like transgender bathrooms. Watching conservatives cheerlead for AI because "America is winning the global race", which may not even be true, can only be described as baffling.