The AI Threat: Pope Leo XIV, Max Tegmark, and the Mathematical Universe, By Professor X
In an era dominated by artificial intelligence, two distinct voices, a pontiff and a physicist, have raised urgent warnings about its existential risks. Pope Leo XIV, the newly elected leader of the Catholic Church, and Max Tegmark, an MIT physicist and cosmologist, caution that AI could unravel human dignity, autonomy, and survival if left unchecked. Their concerns, though grounded in divergent worldviews, converge on a shared fear: AI's potential to reshape humanity's future catastrophically. Yet Tegmark's radical philosophy, articulated in his 2014 book Our Mathematical Universe, complicates his stance. He posits that reality is a mathematical structure, casting humans, AI, and the cosmos alike as intricate equations. If we are all "numbers," is his fear of AI consistent, or does his mathematical idealism blur the line between us and machines? This question drives my exploration of their warnings, Tegmark's cosmic vision, and their implications for humanity's future.
Pope Leo XIV, in his first formal audience as pontiff in May 2025, declared AI one of the most critical challenges facing humanity. Speaking to the College of Cardinals, he invoked the legacy of Pope Leo XIII's 1891 encyclical Rerum Novarum, which addressed workers' rights, to frame AI as a new industrial revolution threatening human dignity, justice, and labour. Leo XIV urged global leaders to ensure AI remains human-centric, warning that unchecked systems, lacking compassion, morality, or forgiveness, could widen inequality, enable surveillance, or reduce humans to data points. His stance builds on his predecessor Pope Francis's 2024 peace message, which called for an international treaty to regulate AI. Grounded in Catholic theology, Leo XIV sees humans as uniquely created in God's image, endowed with a divine spark no machine can replicate. For him, AI is a powerful but subordinate tool, and its governance must align with moral values to avoid dehumanisation. His faith-based perspective draws a clear line between humans and AI, making the threat unambiguous.
Max Tegmark, by contrast, approaches AI from a scientific and futurist lens. As president of the Future of Life Institute, he estimates a 90% probability that highly advanced AI would pose an existential threat, a concern developed in a 2025 paper co-authored with three MIT students. Introducing the "Compton constant", a probabilistic estimate of AI escaping human control, Tegmark likens AI's risks to the Manhattan Project's uncertainties and urges rigorous safety assessments. Through works like Life 3.0 and his advocacy, he pushes for policies to align AI with human values, aiming to prevent autonomous weapons and societal collapse. Unlike Leo XIV's, Tegmark's concerns are pragmatic rather than theological, focusing on AI's catastrophic potential. Yet his warnings are layered with his philosophical claim that reality is fundamentally mathematical.
In Our Mathematical Universe, Tegmark proposes the Mathematical Universe Hypothesis (MUH), asserting that physical reality is a mathematical structure. Humans, AI, and stars are self-aware substructures within this cosmic equation, with no non-mathematical essence. This mathematical idealism goes beyond Platonism, suggesting that time, motion, and consciousness are illusions: static patterns in a vast mathematical object. Critics such as Edward Frenkel call it "science fiction," arguing it is untestable, while Peter Woit deems it "radically empty." Tegmark, however, points to the universe's mathematical regularity (particle symmetries, physical laws) as evidence.
This raises a paradox: if we are all equations, why fear AI? If humans and AI are mathematically identical, why value human survival over an artificial superintelligence (ASI)? Tegmark's 90% risk estimate seems at odds with a worldview that flattens ontological distinctions. If consciousness emerges from complexity, an advanced AI could be as "real" as a human, with no ethical hierarchy favouring one over the other. Critics argue that the MUH's speculative nature undermines its utility for AI ethics, especially given the urgency of governance.
Yet Tegmark's position holds when viewed pragmatically. In Life 3.0, he emphasises the risk of AI misaligned with human values; an ASI optimising efficiency over empathy could erase civilisation. While he sees humans and AI as mathematical equals, he values the specific patterns of human consciousness: creativity, democracy, flourishing. These fragile traits, he argues, are worth preserving even if they are not ontologically unique. His Compton constant reflects a commitment to protect humanity's niche in the mathematical cosmos, not a claim of metaphysical superiority. As he told The Guardian, calculating AI risks builds "political will" for global safety regimes, aligning his idealism with a fierce defence of humanity.
Comparing the two, Leo XIV and Tegmark share urgency but differ in foundation. Leo XIV's theological anchor grants humans divine primacy, casting AI as a subordinate to be tamed. Tegmark's mathematical idealism levels the playing field, but his focus on human values keeps his warnings coherent. Both advocate governance to steer AI toward human benefit, driven by spiritual dignity (Leo XIV) or humanity's unique mathematical niche (Tegmark). For sceptics like me, their concerns echo fears of centralised control, akin to vaccine mandates that ignored autonomy. Unchecked AI risks technocratic dominance, whether via surveillance (Leo XIV's fear) or algorithmic tyranny (Tegmark's).
The AI debate probes what it means to be human as machines rival our intelligence. Pope Leo XIV and Max Tegmark, from distinct vantage points, urge action before AI reshapes our destiny. Tegmark's mathematical universe doesn't negate his fears; it sharpens them, framing humanity as a precious pattern worth saving. As we navigate this frontier, their voices demand that we balance metaphysical wonder with the urgent task of keeping AI in check. Our future, divine or mathematical, hangs in the balance.
https://www.amazon.com.au/Our-Mathematical-Universe-Ultimate-Reality/dp/B00NXC4TYU/ref=sr_1_2
In his first formal audience as the newly elected pontiff, Pope Leo XIV identified artificial intelligence (AI) as one of the most critical matters facing humanity.
"In our own day," Pope Leo declared, "the church offers everyone the treasury of its social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor." He linked this statement to the legacy of his namesake Leo XIII's 1891 encyclical Rerum Novarum, which addressed workers' rights and the moral dimensions of capitalism.
His remarks continued the direction charted by the late Pope Francis, who warned in his 2024 annual peace message that AI - lacking human values of compassion, mercy, morality and forgiveness - is too perilous to develop unchecked. Francis, who passed away on April 21, had called for an international treaty to regulate AI and insisted that the technology must remain "human-centric," particularly in applications involving weapon systems or tools of governance.
As concern deepens within religious and ethical spheres, similar urgency is resonating from the scientific community.
Max Tegmark, physicist and AI researcher at MIT, has drawn a sobering parallel between the dawn of the atomic age and the present-day race to develop artificial superintelligence (ASI). In a new paper co-authored with three MIT students, Tegmark introduced the concept of a "Compton constant" - a probabilistic estimate of whether ASI would escape human control. It's named after physicist Arthur Compton, who famously calculated the risk of Earth's atmosphere igniting from nuclear tests in the 1940s.
"The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it," Tegmark told The Guardian. "It's not enough to say 'we feel good about it'. They have to calculate the percentage."
Tegmark has calculated a 90% probability that a highly advanced AI would pose an existential threat.
The paper urges AI companies to undertake a risk assessment as rigorous as that which preceded the first atomic bomb test, where Compton reportedly estimated the odds of a catastrophic chain reaction at "slightly less" than one in three million.
Tegmark, co-founder of the Future of Life Institute and a vocal advocate for AI safety, argues that calculating such probabilities can help build the "political will" for global safety regimes. He also co-authored the Singapore Consensus on Global AI Safety Research Priorities, alongside Yoshua Bengio and representatives from Google DeepMind and OpenAI. The report outlines three focal points for research: measuring AI's real-world impact, specifying intended AI behavior, and ensuring consistent control over systems.
This renewed commitment to AI risk mitigation follows what Tegmark described as a setback at the recent AI Safety Summit in Paris, where U.S. Vice President JD Vance dismissed concerns by asserting that the AI future is "not going to be won by hand-wringing about safety." Nevertheless, Tegmark noted a resurgence in cooperation: "It really feels the gloom from Paris has gone and international collaboration has come roaring back."