By John Wayne on Monday, 26 February 2024
Category: Race, Culture, Nation

AI Gurus Predict a Terminator-Style Apocalypse By Brian Simpson

A number of AI experts, including founding figures of the field such as Eliezer Yudkowsky, are predicting an apocalyptic disaster from AI developments within a few years' time: the "doom soon" hypothesis. This does not relate merely to the social impact of advancing general AI replacing humans in the workplace, but to the emergence of superintelligence, leading to scenarios from sci-fi movies such as the Terminator series, with AI using nukes against humanity (placed under its control by the elites), or creating a real pandemic via AI-designed bioweapons.

Their case is worth noting, even if viewed as unlikely: if one learned that a team of home invaders was planning to raid one's house at 4 am tomorrow morning, some precautions would naturally be taken. The Guardian has an interesting piece along this broad theme, about the neo-Luddite movement, which is critical of present technological developments, especially in AI and in biotech gain-of-function research conducted in far-from-secure bioweapons labs. I would count myself among these technological pessimists. Unfortunately, we are in a minority and near powerless. Technology worship is the cult of our age, as it delivers the magic that the wizards and medicine men of old could only promise. To some degree, though, the general public has, after the Covid plandemic, seen some of the dark side of technology, with the emergence of technocracy; but the agenda is very far advanced by now.

https://www.theguardian.com/technology/2024/feb/17/humanitys-remaining-timeline-it-looks-more-like-five-years-than-50-meet-the-neo-luddites-warning-of-an-ai-apocalypse

"Eliezer Yudkowsky, a 44-year-old academic wearing a grey polo shirt, rocks slowly on his office chair and explains with real patience – taking things slowly for a novice like me – that every single person we know and love will soon be dead. They will be murdered by rebellious self-aware machines. "The difficulty is, people do not realise," Yudkowsky says mildly, maybe sounding just a bit frustrated, as if irritated by a neighbour's leaf blower or let down by the last pages of a novel. "We have a shred of a chance that humanity survives."

It's January. I have set out to meet and talk to a small but growing band of luddites, doomsayers, disruptors and other AI-era sceptics who see only the bad in the way our spyware-steeped, infinitely doomscrolling world is tending. I want to find out why these techno-pessimists think the way they do. I want to know how they would render change. Out of all of those I speak to, Yudkowsky is the most pessimistic, the least convinced that civilisation has a hope. He is the lead researcher at a nonprofit called the Machine Intelligence Research Institute in Berkeley, California, and you could boil down the results of years of Yudkowsky's theorising there to a couple of vowel sounds: "Oh fuuuuu–!"

"If you put me to a wall," he continues, "and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10." By "remaining timeline", Yudkowsky means: until we face the machine-wrought end of all things. Think Terminator-like apocalypse. Think Matrix hellscape. Yudkowsky was once a founding figure in the development of human-made artificial intelligences – AIs. He has come to believe that these same AIs will soon evolve from their current state of "Ooh, look at that!" smartness, assuming an advanced, God-level super-intelligence, too fast and too ambitious for humans to contain or curtail. Don't imagine a human-made brain in one box, Yudkowsky advises. To grasp where things are heading, he says, try to picture "an alien civilisation that thinks a thousand times faster than us", in lots and lots of boxes, almost too many for us to feasibly dismantle, should we even decide to.

Trying to shake humanity from its complacency about this, Yudkowsky published an op-ed in Time last spring that advised shutting down the computer farms where AIs are grown and trained. In clear, crisp prose, he speculated about the possible need for airstrikes targeted on datacentres; perhaps even nuclear exchange. Was he on to something?

A long way from Berkeley, in the wooded suburb of Sydenham in south London, a quieter form of resistance to technological infringement has been brewing. Nick Hilton, host of a neo-luddite podcast called The Ned Ludd Radio Hour, has invited me over for a cup of tea. We stand in his kitchen, waiting for the kettle to boil, while a beautiful, frisky greyhound called Tub chomps at our ankles. "Write down 'beautiful' in your notebook," encourages Hilton, 31, who as well as running a podcast company works as a freelance journalist. He explains the history of luddism and how – centuries after the luddite protesters of an industrialising England resisted advances in the textile industry that were costing them jobs, destroying machines and being maligned, arrested, even killed in consequence – he came to sympathise with its modern reimagining.

"Luddite has a variety of meanings now, two, maybe three definitions," says Hilton. "Older people will sometimes say, 'Ooh, can you help me with my phone? I'm such a luddite!' And what they mean is, they haven't been able to keep pace with technological change." Then there are the people who actively reject modern devices and appliances, he continues. They may call themselves luddites (or be called that) as well. "But, in its purer historical sense, the term refers to people who are anxious about the interplay of technology and labour markets. And in that sense I would definitely describe myself as one."

Edward Ongweso Jr, a writer and broadcaster, and Molly Crabapple, an artist, both based in New York, define themselves as luddites in this way, too. Ongweso talks to me on the phone while he runs errands around town. We first made contact over social media. We set a date via email. Now we let Google Meet handle the mechanics of a seamless transatlantic call. Neo-luddism isn't about forgoing such innovations, Ongweso explains. Instead, it asks that each new innovation be considered for its merit, its social fairness and its potential for hidden malignity. "To me, luddism is about this idea that just because a technology exists, doesn't mean it gets to sit around unquestioned. Just because we've rolled out some tech doesn't mean we've rolled out some advancement. We should be continually sceptical, especially when technology is being applied in work spaces and elsewhere to order social life."

Crabapple, the artist luddite, broadly agrees. "For me, a luddite is someone who looks at technology critically and rejects aspects of it that are meant to disempower, deskill or impoverish them. Technology is not something that's introduced by some god in heaven who has our best interests at heart. Technological development is shaped by money, it's shaped by power, and it's generally targeted towards the interests of those in power as opposed to the interests of those without it. That stereotypical definition of a luddite as some stupid worker who smashes machines because they're dumb? That was concocted by bosses."

Where a techno-pessimist like Yudkowsky would have us address the biggest-picture threats conceivable (to the point at which our fingers are fumbling for the nuclear codes), neo-luddites tend to focus on ground-level concerns. Employment, especially, because this is where technology enriched by AIs seems to be causing the most pain. Lorry drivers have their mileage minutely tracked, their rest hours questioned. Desk workers may sit in front of cameras that snap pictures at random intervals, ensuring attendance and attention. You could call these workplace efficiencies. You could call them gross affronts. Guess which the luddites would argue. Labour rights go to the very historical core of this movement.

Hilton called his podcast The Ned Ludd Radio Hour to honour a man who might have lived about 250 years ago or might never have lived at all. As Hilton has explained on his show, Ned Ludd is thought to have been a textile worker living in the English Midlands in the late 1770s. It's said he smashed a few weaving machines after being flogged for his idleness on the job. Something about the smashing might have resonated with his peers. As Hilton has explained: "Within a few decades, the veracity of Ludd's identity would be lost for ever, but the name would live on. The luddites became an organised band of frame-breakers in the 1810s. They fought the Industrial Revolution… and they lost. They lost big time. In fact they lost so badly that the reality of their name became a victim of [obfuscation]."

The history of the luddite rebellion is taught in British schools – but confusedly, in a way that allowed for at least some of us, me included, to come away with an idea that to be a luddite is to be naive or else fearful and monk-ish. As Hilton walks me through from his kitchen to his lounge, a room busy with the interconnected equipment he uses to make his podcasts, he feels the need to apologise. By at least one definition of the word, "I live a very not-luddite life," Hilton says, gesturing helplessly at open laptop, wireless earbuds, towering mic. "My work is tech-based. I can't avoid it. I don't claim to be some person living in the woods. But I am anxious. I feel things fraying."

It is this premonition of a fraying that has brought others to a modern version of luddism. An academic called Jathan Sadowski was one of the first to knit together anxieties about our quickening tech revolution with the anxieties of those weavers who took a stand against the infringements of an earlier machine age. "Luddism is founded on a politics of refusal, which in reality just means having the right and ability to say no to things that directly impact upon your life," Sadowski tells me when we speak. "This should not be treated as an extreme stance, and yet in a culture that fetishises technology for its own sake, saying no to technology is unthinkable."

At least, that was the case until 2023 – a year in which ChatGPT (developed by a company called OpenAI), Bard (developed by Google) and other user-friendly AIs were embraced by the world. At the same time, image generators such as Dall-E and Midjourney wowed people with their simulacrum photos and graphic art. "They won't be replacing the prime minister with ChatGPT or the governor of the Bank of England with Bard," Hilton has said on his podcast. "They won't be swapping out Christopher Nolan for Dall-E or Martin Scorsese for Midjourney, but fat will be cut from the great labour steak."

In January 2023, a display of AI-generated landscapes, projected on to the wall of a gallery in Vermont, was vandalised with the words "AI IS THEFT". Creative professionals were starting to feel exploited. Masses of uncredited, unpaid-for human work were being harvested from the internet and repurposed by clever generative AIs. In spring 2023, Crabapple organised an open letter that called for restrictions on this "vampirical" practice. There were more open letters, including one that called for a six-month pause on the development of any new AIs.

There were instances of direct action, some serious, some tongue-in-cheek or halfway between. In Los Angeles, opponents of those omnipresent Ring camera doorbells distributed "Anti Ring" stickers to be gummed over the lenses of the devices. A group of San Franciscans calling themselves Safe Street Rebel started seizing traffic cones and placing them on the bonnets of the city's self-driving cars, a quick way of confusing the cars' sensors and rendering them inoperable. Brian Merchant, a writer who last year published Blood in the Machine, a history of luddism, appeared at an event with Safe Street Rebel in November 2023. In front of cheering Californians, he staged a "luddite tribunal", smashing devices the crowd deemed superfluous.

"There's a sense that this is now in the realm of the possible, to actually reject outright parts or uses of a technology without looking foolish," Merchant tells me. As we speak, he is preparing for another tribunal, this time at a bookshop called Page Against the Machine.

Maybe luddism is the answer. As far as I can make out, talking to all these people, it isn't about refusing advancement, instead it's an act of wondering: are we still advancing our relish of the world? How queasy or unreal or threatened do we need to feel before we stop seeing these conveniences as convenient? The author Zadie Smith has joked in the past that we gave ourselves to tech too cheaply in the first instance, all for the pleasure, really, of being a moving dot on a useful digital map. Now bosses can track their workers' every keystroke. Telemarketing firms put out sales calls with AI-generated voices that mimic former employees who have been let go. A few weeks back, in January, the largest-ever survey of AI researchers found that 16% of them believed their work would lead to the extinction of humankind.

"That's a one-in-six chance of catastrophe," says Alistair Stewart, a former British soldier turned master's student. "That's Russian-roulette odds." I meet Stewart, who is 28, outside the London headquarters of Google's AI division. In what I would consider a pretty strange comms effort, Google has just commissioned some outdoor art to ease public fears about the current pace of machine learning. It's a confusing display. One of the artworks depicts a vista of lush green hills, cosy lakeside houses – and, behind all this, a vast smoking mushroom cloud. "Scientists are using AI to create more stable and efficient [nuclear] fusion reactors," an info panel reads. Cool?

It's the stuff of dread for Stewart. He has taken part in protests against AI development, at one point unfurling a banner outside this Google building that called for a pause on the work going on inside. Not a lot of people joined him on that protest. Stewart understands. AIs, invisible and decentralised, swarming between datacentres that are spread around the world, are hard to conceptualise as possible threats, at least when compared with issues such as the climate crisis or animal welfare, the visceral effects of which can be seen and felt. "It doesn't always keep me up at night," Stewart says of the latent danger he perceives. "I don't personally feel anxiety on a day-to-day basis. And that's part of the problem. Me, with all of my resources and education – I still struggle to form an emotional connection to this problem." Last year, he published a blogpost that pondered next steps, listing "occupation of AI offices", "performative vandalism of AI offices" and even "sabotage of AI computing infrastructure" as possible forms of resistance." 