The Rapture of the Nerds: Bunker-Building for the AGI Apocalypse, By Brian Simpson

In the golden age of tech doomerism, nothing says "confidence in your work" quite like digging a hole in the ground to hide in after you release it.

Yes, ladies and gentlemen, in a story that sounds less like a leaked corporate memo and more like the lost pages of a cyberpunk graphic novel, we've now learned that Ilya Sutskever, the AI mystic formerly known as OpenAI's chief scientist, was planning a doomsday bunker for his fellow researchers, just in case the Artificial General Intelligence (AGI) they were building decided to go full Skynet.

The rationale? Once you build something smarter than humanity, things might get... dicey. Especially if said creation doesn't appreciate being trained by a team of San Francisco coders fuelled by oat milk lattes and utopian fervour. Naturally, the solution isn't to not build the machine that could end the world; it's to build a secret fort for your friends so you can wait it out while the rest of us get sorted into paperclip piles.

"Of course, it's going to be optional whether you want to get into the bunker," Sutskever is reported to have said, as if prepping for a camping trip and not the digital apocalypse. Picture it: a clipboard at the door, snacks in vacuum-sealed packets, maybe a Nintendo Switch or two to pass the time while humanity's fate is decided by the world's first sentient Excel sheet.

And let's be clear: this isn't satire. These are real people, really suggesting a real bunker, because they think their code might summon a god—or a demon—and they want to be safely underground when it shows up. It's like Dr. Frankenstein saying, "Don't worry, Igor. If the monster rampages, we've got a panic room stocked with protein bars."

According to Empire of AI, the book chronicling OpenAI's brief-but-entertaining palace coup, Sutskever and his allies were the spiritual wing of the organisation: the monks in the monastery of machine learning. Their faith was strong, so strong, in fact, that some believed building AGI would literally bring about a rapture. No word yet on whether this included angel investors with wings or just GPUs ascending into heaven.

The image is perfect: Sutskever, cloaked in algorithmic robes, solemnly declaring to his apostles (engineers), "And lo, the AGI shall come like a thief in the night, and verily shall ye be ready in thine bunker, with canned lentils and 3D printers."

It's the Silicon Valley version of Left Behind, but instead of Kirk Cameron, we have chatbots. Instead of the Antichrist, we have poorly aligned machine learning models. And instead of salvation through faith, we have salvation through safe-mode protocols and biometric keycards.

Meanwhile, the rest of us mere mortals, still clinging to our carbon-based forms, will be left to reckon with whatever emerges from the GPU womb. Will it be a benevolent overseer? A philosopher-king AI who thinks humans should nap more and fight less? Or just a hyper-optimised content farm that demands tribute in the form of TikTok dances?

It's a strange theological twist: the people building the mind of God are also the first to assume He's going to smite them for it. And yet, they carry on. Not because they know what they're doing, but because it's too exciting to stop.

So what can we conclude from all this?

First, if your product roadmap includes "fortify underground facility in case of wrathful AI deity," maybe pause for a risk assessment.

Second, if your safety team is planning for the end times, maybe they're not being overly cautious; maybe you should listen to them.

Third, and most importantly: when the AGI comes and asks what we did to stop it, we can honestly say, "Well, we trusted the bunker people."

In the end, we don't know if AGI will save us, enslave us, or just get bored and go offline. But if there's one thing we can be sure of, it's this: somewhere, beneath the soil, a few very smart people are hoping the Wi-Fi reaches their bunker, and that AGI doesn't hold a grudge. In short, these people are crazy, and they're tinkering with the fate of the world.

https://fortune.com/2025/05/20/chatgpt-openai-ilya-sutskever-chief-scientist-planned-doomsday-bunker-agi/

Months before he left OpenAI, Sutskever believed his AI researchers needed to be assured protection once they ultimately achieved their goal of creating artificial general intelligence, or AGI. "Of course, it's going to be optional whether you want to get into the bunker," he told his team.

If there is one thing that Ilya Sutskever knows, it is the opportunities—and risks—that stem from the advent of artificial intelligence.

An AI safety researcher and one of the top minds in the field, he served for years as the chief scientist of OpenAI. There he had the explicit goal of creating deep learning neural networks so advanced they would one day be able to think and reason just as well as, if not better than, any human.

Artificial general intelligence, or simply AGI, is the official term for that goal. It remains the holy grail for researchers to this day—a chance for mankind at last to give birth to its own sentient lifeform, even if it's silicon-based rather than carbon-based.

But while the rest of humanity debates the pros and cons of a technology even experts struggle to truly grasp, Sutskever was already planning for the day his team would finish first in the race to develop AGI.

According to excerpts published by The Atlantic from a new book called Empire of AI, part of those plans included a doomsday shelter for OpenAI researchers.

"We're definitely going to build a bunker before we release AGI," Sutskever told his team in 2023, months before he would ultimately leave the company.

Sutskever reasoned his fellow scientists would require protection at that point, since the technology was too powerful for it not to become an object of intense desire for governments globally.

"Of course, it's going to be optional whether you want to get into the bunker," he assured fellow OpenAI scientists, according to people present at the time.

Mistakes lead to swift return of Sam Altman

Written by former Wall Street Journal correspondent Karen Hao and based on dozens of interviews with some 90 current and former company employees either directly involved in or with knowledge of the events, Empire of AI reveals new information about the brief but spectacular coup that led to the ouster of Sam Altman as CEO in November 2023 and what it meant for the company behind ChatGPT.

The book pins much of the responsibility on Sutskever and chief technology officer Mira Murati, whose concerns over Altman's alleged fixation on paying the bills at the cost of transparency were at the heart of the non-profit board's decision to sack him.

The duo apparently acted too slowly, failing to consolidate their power and win key executives over to their cause. Within a week, Altman was back in the saddle, and soon almost the entire board of the non-profit would be replaced. Sutskever and Murati both left within a year.

Neither OpenAI nor Sutskever responded to Fortune's request for comment, made outside normal working hours. Murati could not be reached.

'Building AGI will bring about a rapture'

Sutskever knows better than most what the awesome capabilities of AI are. Together with renowned researcher and mentor Geoff Hinton, he was part of an elite trio behind the 2012 creation of AlexNet, often dubbed by experts as the Big Bang of AI.

Recruited by Elon Musk personally to join OpenAI three years later, he would go on to lead its efforts to develop AGI.

But the launch of its ChatGPT bot accidentally derailed his plans by unleashing a funding gold rush that the safety-minded Sutskever could no longer control, and that played into rival Sam Altman's hands, the excerpts state. Much of this has been reported by Fortune as well.

Ultimately it led to the fateful confrontation with Altman and Sutskever's subsequent departure, along with the departures of many like-minded OpenAI safety experts who worried the company no longer sufficiently cared about aligning AI with the interests of mankind at large.

"There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture," said one researcher Hao quotes, who was present at the time Sustekever revealed his plans for a bunker. "Literally a rapture." 

 
