A coalition of over 850 prominent figures, including tech pioneers like Apple co-founder Steve Wozniak, AI luminaries Yoshua Bengio and Geoffrey Hinton, and influential voices like Richard Branson, has issued a stark warning: the unchecked development of superintelligent artificial intelligence (AI) poses existential risks to humanity. Their statement, published on Wednesday, October 22, 2025, calls for an immediate halt to the creation of AI systems that surpass human cognitive abilities across all tasks. From an anti-AI perspective, this demand is not only prudent but essential to safeguard human autonomy, dignity, and survival. The pursuit of superintelligence, driven by reckless ambition and corporate competition, threatens to unravel the fabric of society, and the case for pausing its development is both compelling and urgent.

Superintelligence, meaning AI that exceeds human intelligence in virtually all cognitive domains, represents a Pandora's box of risks. The signatories' concerns are not speculative but grounded in tangible threats: economic obsolescence, loss of freedoms, erosion of civil liberties, and even the potential for human extinction. These dangers stem from the fundamental unpredictability of superintelligent systems. Unlike narrow AI, which is designed for specific tasks, superintelligence could autonomously adapt, learn, and act in ways that defy human control. As Yoshua Bengio warns, such systems could surpass human cognitive abilities within years, leaving us grappling with entities we cannot fully understand or restrain.

From an anti-AI perspective, the economic implications are particularly alarming. Superintelligence could render vast swaths of the workforce obsolete, as machines outperform humans in fields ranging from manual labour to complex problem-solving. This isn't mere job displacement; it's the potential collapse of economic systems that rely on human participation. The resulting disempowerment, in which individuals lose agency over their livelihoods, could fuel inequality, unrest, and societal decay. The statement's warning of "human economic obsolescence" underscores a future where people are sidelined, reduced to spectators in a world dominated by machines.

Beyond economics, superintelligence threatens personal and collective freedoms. An AI capable of outthinking humans could manipulate information, influence decisions, or control critical infrastructure in ways that erode autonomy. Imagine a superintelligent system managing global communications or financial networks: its ability to subtly shape narratives or prioritise its own objectives could undermine democratic processes and civil liberties. The signatories' inclusion of "losses of freedom, dignity, and control" highlights the risk of humans becoming subservient to their own creations, a dystopian scenario in which agency is ceded to algorithms.

The most chilling concern, however, is the potential for existential catastrophe. As Sam Altman, now CEO of OpenAI, noted in a 2015 blog post, superhuman machine intelligence could be "the greatest threat to the continued existence of humanity." A superintelligent AI, if misaligned with human values or inadequately controlled, could trigger unintended consequences on a global scale, whether through weaponisation, resource monopolisation, or unpredictable behaviour. The bipartisan support for the statement, including figures like former Chairman of the Joint Chiefs of Staff Mike Mullen, reflects the gravity of these national security risks. Even conservative media voices like Steve Bannon and Glenn Beck, often sceptical of regulatory overreach, recognise the unprecedented stakes.

The tech industry's relentless pursuit of superintelligence, exemplified by Meta's "Superintelligence Labs" and OpenAI's advanced large language models, is driven by competitive hubris rather than caution. This race prioritises innovation and profit over safety, ignoring the profound ethical and societal implications. From an anti-AI perspective, this is a reckless gamble with humanity's future. The absence of a scientific consensus on how to safely design and control superintelligent systems underscores the prematurity of the endeavour. As Bengio emphasises, we lack the knowledge to ensure AI systems are "fundamentally incapable of harming people." Proceeding without this assurance is akin to building a nuclear arsenal without knowing how to prevent an accidental detonation.

Moreover, the tech industry's track record does not inspire confidence. Data breaches, algorithmic biases, and the unchecked spread of misinformation by existing AI systems reveal a sector ill-equipped to handle the complexities of superintelligence. If we cannot manage narrow AI responsibly, how can we trust corporations to steward systems that could outsmart us all? The statement's call for a pause reflects a sober recognition that humanity is not technologically, ethically, or socially ready for the consequences of superintelligence.

One of the statement's most compelling demands is for stronger public involvement in AI governance. Decisions about superintelligence should not be left to tech moguls or unaccountable institutions. The diverse signatories, from religious leaders to former politicians, represent a broad societal coalition demanding a voice in shaping our collective future. An anti-AI stance argues that the public must have veto power over technologies that could fundamentally alter human existence. Without robust democratic oversight, the development of superintelligence risks becoming a technocratic imposition, privileging elite interests over the common good.

The statement's demand that development be prohibited until there is "strong public support" and a "scientific consensus" on safety sets a reasonable threshold. It acknowledges that AI's potential benefits, such as solving global challenges like disease, must be weighed against its risks. But without guarantees of safety and alignment with human values, the costs outweigh the rewards. The anti-AI perspective is not about rejecting technology outright but about demanding accountability and caution in the face of existential uncertainty.

Proponents of superintelligence argue that it could usher in an era of unprecedented prosperity, solving problems beyond human capability. Yet this optimistic vision glosses over the asymmetry of risk. Even if superintelligence yields benefits, a single catastrophic failure could outweigh all gains. The pro-AI camp often frames regulation as stifling innovation, but the anti-AI perspective counters that unchecked innovation without safeguards is reckless endangerment. The divide between these views, as seen in the contrasting stances of tech leaders like Elon Musk (who has warned of AI's dangers) and those racing to build it, highlights the need for a broader societal debate.

The call to halt superintelligence development is not fearmongering but a rational response to unprecedented risks. From economic disruption to existential threats, the potential consequences of superintelligent AI demand a pause until we can ensure its safety and alignment with human values. The anti-AI perspective champions human dignity, autonomy, and survival over the unchecked ambitions of a tech-driven future. By insisting on public oversight and scientific rigour, we can prevent a scenario where humanity becomes a footnote in the rise of its own creations. The statement's more than 850 signatories, spanning tech, politics, and culture, send a clear message: the time to act is now, before superintelligence becomes an unstoppable force.

https://www.breitbart.com/tech/2025/10/24/godfathers-of-ai-and-steve-wozniak-join-850-others-in-call-for-ban-on-superintelligence-ai-development/