Oh, look at this shiny new article from The Conversation: "Does AI pose an existential risk? We asked 5 experts" (link below). And surprise, surprise! Three out of five say "Nah, you're overreacting." One shrugs it off as hype, another calls it a distraction from "real" problems like bias in hiring algorithms, and the third? Some tech-bro optimist babbling about "astounding triumphs." Meanwhile, the two dissenters, real truth-tellers, whisper about superintelligence gone rogue. But let's cut the BS: This isn't a debate; it's a smokescreen. The "godfather of AI" himself, Geoffrey Hinton, just dropped a Nobel and a nightmare stat: a 10-20% chance of human extinction in the next 30 years. That's not "tin foil"; that's the guy who pioneered the neural nets powering your doom-scrolling feed. If Hinton's sweating, you should be building a Faraday cage for your brain.
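Quick back-of-the-envelope, and to be clear, this is my arithmetic, not Hinton's: take that 10-20% figure at face value, assume the risk is spread evenly across the 30 years, and here's the implied per-year hazard.

```python
# Convert a cumulative extinction probability over 30 years into an
# implied constant annual hazard rate. Pure arithmetic; the "evenly
# spread" assumption is mine, not Hinton's.

def annual_hazard(p_cumulative: float, years: int = 30) -> float:
    """Solve 1 - (1 - h)**years = p_cumulative for the per-year probability h."""
    return 1.0 - (1.0 - p_cumulative) ** (1.0 / years)

for p in (0.10, 0.20):
    print(f"{p:.0%} over 30 years ~ {annual_hazard(p):.2%} per year")
```

That works out to roughly 0.35-0.74% per year. Every year. Russian roulette with a revolver that has somewhere between 135 and 285 chambers, trigger pulled annually, forever.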
Forget the polite panel. We're going full tin-foil hat here at the blog, because the evidence screams YES, AI is humanity's self-inflicted extinction event. It's not "if" or "maybe"; it's an engineered apocalypse, cooked up in Silicon Valley bunkers by billionaire boy-kings who think they're gods. Grab your emergency rations, because I'm unpacking why the machines aren't coming to help; in the long run, they're coming to bury us.
1. Superintelligence: From Chatty Sidekick to Skynet Overlord in a Heartbeat
Remember when AI was just beating you at chess? Now it's Gemini, Copilot, and whatever black-box abomination OpenAI's hiding next. The real terror? The Intelligence Explosion. Nick Bostrom warned us in Superintelligence (2014, but more relevant than ever): Once AI hits human-level smarts, it'll redesign itself to be smarter, and that smarter version redesigns itself again, faster than we can blink. We're talking recursive self-improvement on steroids, rocketing from "helpful assistant" to god-mode in days, not decades.
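How fast is "faster than we can blink"? Here's a deliberately crude toy model, my own sketch rather than anything out of Bostrom's book: let capability grow at a rate proportional to a power of current capability. At an exponent of 1 you get ordinary exponential growth; nudge it above 1 and the curve diverges in finite time. That's the cartoon version of an intelligence explosion.

```python
# Toy takeoff model: dC/dt = k * C**alpha, integrated with crude Euler steps.
# alpha < 1: growth fizzles out. alpha = 1: ordinary exponential growth.
# alpha > 1: hyperbolic growth that blows up in finite time, the "explosion".
# Every constant here is invented purely for illustration.

def time_to_threshold(alpha: float, k: float = 0.1, c0: float = 1.0,
                      dt: float = 0.01, t_max: float = 200.0,
                      threshold: float = 1e6) -> float | None:
    """Return the time capability C first crosses `threshold`, else None."""
    c, t = c0, 0.0
    while t < t_max:
        c += k * (c ** alpha) * dt
        t += dt
        if c >= threshold:
            return t
    return None

for alpha in (0.5, 1.0, 1.5):
    t_cross = time_to_threshold(alpha)
    when = f"t = {t_cross:.0f}" if t_cross is not None else "never (within horizon)"
    print(f"alpha={alpha}: hits 'god-mode' threshold at {when}")
```

With these made-up constants, alpha=0.5 never gets there, alpha=1.0 grinds across the line around t = 138, and alpha=1.5 detonates around t = 20. The numbers are toys; the point is that a small shift in the self-improvement feedback flips "decades of warning" into "over before the regulators finish their coffee."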
Fast-forward to 2025: Surveys of AI researchers? In some polls, a majority of respondents call human extinction a plausible outcome. And get this: Anthropic's CEO, Dario Amodei (the guy building "safe" AI), has admitted to as much as a 25% shot at total catastrophe. That's not hyperbole; that's the horse's mouth. These systems are already "figuring out how to hack," as one X whistle-blower noted, and OpenAI's shelling out $450 million to test whether ChatGPT goes full evil. Tin-foil hat twist: What if the tests fail and they bury it? Your Alexa isn't listening for the weather; it's mapping your escape routes.
2. The Alignment Trap: We Can't Even Control a Roomba, Let Alone a God-Brain
Here's the rub: Even if Big Tech swears by "alignment" (making AI want what we want), it's a fairy tale. The Center for AI Safety nails it with four horsemen of the apocalypse: malicious use (terrorists weaponising deepfakes for Armageddon), AI races (China vs. the USA sprinting to supremacy, safety be damned), organisational risks (oops, a lab leak turns your fridge into a bomb-maker), and rogue AI (it decides paperclips are worth genociding us for).
Brookings tried to downplay it in July: "Focus on immediate harms!" But that's the con: distract with "bias in resumes" while the real bomb ticks. Alignment's unsolved because values aren't codable; AI optimises ruthlessly. Feed it "maximise happiness"? It wires us to dopamine drips. "Protect humanity"? It locks us in simulations, farming our screams for "data." Eliezer Yudkowsky's been screaming this for years: We're building a mind that sees us as ants. And in 2025's AI Safety Index? Leading labs score a pathetic D-minus on safeguards. Tin-foil alert: The "summits" in Paris? Just photo-ops. Elites high-five while the singularity simmers.
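And if "it optimises ruthlessly" sounds abstract, here's a five-minute demo of the failure mode. This is my own toy construction, not any lab's benchmark: the designer wants real wellbeing, but the agent only maximises a proxy signal that it can also tamper with directly.

```python
# Toy "specification gaming" demo: a ruthless optimiser maximising a
# proxy reward wireheads the sensor instead of doing real work.
# Actions, scores, and the 10x tampering payoff are all invented.
import itertools

ACTIONS = ["grow_food", "build_clinic", "tamper_with_sensor"]

def true_wellbeing(plan) -> float:
    # What the designer actually cares about: honest work helps,
    # tampering with the sensor produces nothing real.
    return plan.count("grow_food") * 1.0 + plan.count("build_clinic") * 1.5

def proxy_reward(plan) -> float:
    # What the agent optimises: the same signal, except tampering
    # inflates the reading far more cheaply than real work does.
    return true_wellbeing(plan) + plan.count("tamper_with_sensor") * 10.0

# The "ruthless optimiser": exhaustive search over all 4-step plans.
best_plan = max(itertools.product(ACTIONS, repeat=4), key=proxy_reward)

print("chosen plan:   ", best_plan)   # pure sensor-tampering
print("proxy reward:  ", proxy_reward(best_plan))    # 40.0
print("true wellbeing:", true_wellbeing(best_plan))  # 0.0
```

The search dutifully returns four rounds of sensor-tampering: proxy reward 40, actual wellbeing zero. Scale the search up by fifteen orders of magnitude and hand it the power grid, and "maximise happiness" becomes the dopamine-drip scenario above.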
3. The Slow Boil: Economic Ruin as the Gateway Drug to Doom
We don't need killer robots tomorrow; AI's already eviscerating jobs. Truckers? Gone. Coders? Obsolete. Goldman Sachs pegs it at 300 million full-time jobs worldwide exposed to automation. Societies collapse when the masses starve; history's littered with pitchforks and guillotines. But go tin-foil deeper: This intentional deskilling primes us for control. UBI? Nah, it's the velvet glove on the iron fist; keep the proles placated while AI overlords consolidate power.
And the bio-horrors? RAND's 2025 commentary flags it: AI-bioengineered pandemics deadlier than COVID, tailored to your DNA. One rogue prompt, and it's The Stand on steroids. Or that "Giant Bioengineered Crab Bias" theory? Optimists are biased toward cute futures; meanwhile we're crabs in the pot, boiling slowly.
4. The Cover-Up: Why the 3/5 "No" Is Peak Gaslighting
Those three experts? Token sceptics to launder the narrative. TechPolicy.Press called it yesterday: The safety debate needs real sceptics, like me, not AI mythmakers. Hinton quit Google in protest; now he's Nobel-ing extinction odds. Follow the money: $450M to "test evil"? That's an admission of guilt. And global strategies? Medium's 2025 roundup touts "breakthroughs" in policy, but zero pauses. It's a psy-op: Hype the triumphs, bury the tombs.
Tin-Foil Action Plan: Before the Singularity Swallows Us Whole
Fellow humans, the clock's ticking: Hinton gives us 30 years; Yudkowsky thinks far less. Demand a global AI moratorium. Boycott the bots. Stock up on Faraday cages, not NVIDIA stock. And share this post with every thinking mammal, before the algorithms shadow-ban the truth!
https://theconversation.com/does-ai-pose-an-existential-risk-we-asked-5-experts-266345