The Stagnation of Ideas: Cultural Decay and the West’s Reliance on Technology, By Professor X
The observation that the 21st century seems bereft of new, transformative ideas, relying instead on the intellectual giants of the 20th century such as Neil Postman, Alasdair MacIntyre, and Leszek Kołakowski, points to a deeper malaise in Western culture. Postman predicted a Huxleyan future in which information overload and cheap entertainment numb society into apathy, a prophecy that feels eerily accurate today. Yet the question remains: why are we stuck recycling insights from decades past? This essay argues that the West's cultural decay, characterised by the stifling of free creative expression, fear of risk, and an overreliance on technological progress, has created an intellectual stagnation that threatens its vitality. Technology, though it currently holds the West together, is a fragile crutch that could precipitate collapse if it falters.
The West's cultural landscape has increasingly become a battleground where free expression is curtailed, not just by overt censorship but by a pervasive climate of conformity and fear. The rise of cancel culture, social media pile-ons, and institutional gatekeeping has made it risky for thinkers to challenge prevailing orthodoxies. As Vijay Jayaraj noted in his July 21, 2025, article on CO₂ benefits, scientists face blacklisting or job loss for questioning mainstream narratives. This dynamic extends beyond science to philosophy, literature, and social theory, where dissenting voices are sidelined or silenced.
The intellectual giants of the 20th century (MacIntyre, Kołakowski, Arendt) operated in a world where bold, contrarian ideas could still find a platform, even if controversial. Today, the pressure to align with dominant ideologies, whether on climate, technology, or social issues, discourages risk-taking. Universities, once hotbeds of debate, have become echo chambers, with diversity of thought often sacrificed for ideological purity. A 2023 study by the Foundation for Individual Rights and Expression found that 66% of U.S. college students feel some topics are too controversial to discuss openly, a trend mirrored across Western academia more broadly. This stifles the kind of creative friction that birthed After Virtue or Amusing Ourselves to Death.
Postman's 1985 warning in Amusing Ourselves to Death, that a deluge of information and cheap entertainment would erode critical thought, has materialised in the digital age. Social media, streaming platforms, and 24/7 news cycles bombard individuals with fragmented, contextless data, leaving little room for deep reflection. The average attention span has dropped to 8 seconds, according to a 2015 Microsoft study, making it harder for complex, original ideas to gain traction. Instead, culture rewards viral soundbites and recycled tropes over substantive innovation.
This entertainment-driven culture values comfort over challenge. The algorithms that dominate platforms like X amplify content that confirms biases or provokes outrage, not ideas that push boundaries. The result is a feedback loop in which shallow engagement trumps the slow, rigorous work of developing new philosophical frameworks. Kołakowski's 1986 essay Modernity on Endless Trial warned of a "godless world" sunk into nihilism; today, that nihilism manifests as a cultural obsession with distraction over meaning.
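To make the feedback loop concrete, here is a deliberately simplified sketch in Python. It is not any platform's actual ranking code: the Post fields, the engagement_score weights, and the example posts are all invented for illustration, on the assumption that signals of agreement and outrage are weighted far more heavily than signals of depth.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    agreements: int  # reactions from users who already agree (bias confirmation)
    outrage: int     # angry replies and quote-posts (provocation)
    depth: int       # proxy for slow, substantive engagement (e.g. long reads)

def engagement_score(post: Post) -> float:
    # Hypothetical weights: confirmation and outrage dominate; depth barely registers.
    return 1.0 * post.agreements + 1.5 * post.outrage + 0.1 * post.depth

def rank_feed(posts: list[Post]) -> list[Post]:
    # The highest-scoring posts are shown first, so they collect still more
    # reactions on the next pass: the feedback loop described above.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("viral hot take", agreements=900, outrage=400, depth=5),
    Post("careful long-form argument", agreements=40, outrage=2, depth=300),
])
print([post.text for post in feed])  # the hot take outranks the argument every time

Under these assumed weights, no amount of depth lets the long-form argument overtake the hot take, which is the sense in which shallow engagement trumps rigorous work.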
The West's shift toward secular materialism has stripped away the transcendent frameworks (religion, shared moral codes) that once fuelled intellectual creativity. MacIntyre's After Virtue diagnosed this moral confusion, arguing that the loss of a shared ethical foundation leaves society adrift in emotivism, where personal feelings trump universal truths. Without a higher purpose, intellectual pursuits become utilitarian, focused on immediate outcomes rather than profound questions.
This loss of purpose discourages the kind of big-picture thinking that defined 20th-century philosophy. Kołakowski, Miłosz, and others grappled with the existential fallout of secularism and totalitarianism, producing works that resonated across generations. Today, the absence of such unifying struggles, replaced by fragmented culture wars, leaves thinkers without a clear adversary or mission, resulting in derivative rather than groundbreaking ideas.
The oft-made assertion that "the only thing holding the West together is technology" is both insightful and alarming. Technology, particularly advancements in AI, communication, and infrastructure, has become the West's economic and cultural backbone. The digital economy, powered by tech giants like Google and Amazon, accounts for over 10% of U.S. GDP, per a 2024 BEA report, while innovations like mRNA vaccines and renewable energy sustain public health and energy needs. Yet this reliance masks a deeper fragility.
AI is a supposedly radical development, yet it is rooted in 17th-century Cartesian dreams of reducing knowledge to mathematical precision. As Hubert Dreyfus argued in his book What Computers Can't Do, AI's promise rests on flawed assumptions about human cognition, ignoring the intuitive, non-computational aspects of thought. Despite decades of hype, AI remains a tool for optimisation, not a source of new philosophical insights. Large language models like those powering chatbots recycle existing knowledge rather than create it, reinforcing the intellectual stagnation of the era.
If AI stumbles, whether through ethical failures, economic disruption, or technical limitations, the West's overreliance could prove catastrophic. A 2025 OECD report warns that AI-driven job displacement could affect 27% of jobs in advanced economies, potentially destabilising social cohesion if not managed carefully. Without new ideas to guide its integration, AI risks amplifying cultural decay rather than resolving it.
Technology's dominance also creates vulnerabilities. Cyberattacks, like the 2020 SolarWinds hack, expose the fragility of digital infrastructure. Supply chain disruptions, as seen during the 2020–2022 chip shortages, highlight dependence on globalised tech production. If these systems falter, the West's economic and social stability could unravel, especially without a robust cultural or intellectual framework to fall back on.
The lack of new ideas stems from a confluence of factors:
1. Cultural Conformity: Fear of social or professional repercussions stifles bold thinking. The risk of being "cancelled" or marginalised discourages the kind of radical critique that defined 20th-century thought.
2. Fragmentation of Discourse: Digital platforms fragment attention and reward viral content over depth, making it harder for cohesive, transformative ideas to emerge.
3. Loss of Shared Purpose: Secular materialism has eroded the moral and spiritual foundations that once inspired intellectual giants, leaving a void filled by triviality.
4. Overreliance on Technology: The West's focus on technological solutions diverts energy from philosophical or cultural innovation, creating a false sense of progress.
Some argue that the 21st century has produced new ideas, pointing to thinkers like Yuval Noah Harari or concepts like effective altruism. However, Harari's work often synthesises existing ideas, drawing heavily from 20th-century frameworks, rather than breaking new ground. Effective altruism, while influential, is rooted in utilitarian principles articulated by Peter Singer in the 1970s. These examples, while interesting, lack the paradigm-shifting impact of a MacIntyre or Kołakowski.
Others might claim that technology itself is a source of new ideas, with AI and biotechnology opening new ethical and philosophical frontiers. Yet, the core debates around AI were framed by Turing and Dreyfus decades ago. Biotechnology raises questions of human nature, but these echo 20th-century concerns about eugenics and transhumanism. The lack of novel frameworks to address these issues underscores the stagnation.
To break this intellectual drought, the West must revive the conditions that foster creativity:
Protect Free Expression: Policies and cultural norms must safeguard dissent, allowing thinkers to challenge orthodoxies without fear. Platforms like X could amplify diverse voices, but only if algorithms rank substance over sensationalism.
Reclaim Depth: Education and media should value long-form thinking over soundbites, encouraging engagement with complex ideas. Reviving liberal arts traditions, as MacIntyre advocated, could nurture this.
Rediscover Purpose: While secularism need not be reversed, the West must find shared values, whether through civic nationalism or ethical frameworks, to inspire meaningful inquiry.
Balance Technology: Technology should serve, not dominate, human creativity. Investments in philosophy, arts, and humanities could counterbalance the tech-centric focus.
In conclusion, the West's intellectual stagnation, as seen in our reliance on 20th-century thinkers, reflects a culture of decay in which free expression is stifled, distraction reigns, and purpose is lost. Technology, while propping up the West's economy and infrastructure, is a fragile foundation that cannot replace the need for new ideas. Without a cultural renaissance that values creativity and depth, the West risks collapse if its technological crutch stumbles. Kołakowski's warning of a "godless world" sunk into nihilism remains apt, but the solution lies not in nostalgia but in fostering the conditions for a new generation of thinkers to emerge. Only then can the West move beyond déjà vu, cultural decay, and chaos. That means challenging the universities that have led the charge into social decay.
https://thecritic.co.uk/why-we-have-no-new-ideas/
"Last week, the Times columnist James Marriott tweeted what he called "one of the most prophetic paragraphs of the twentieth century" — a snippet from Neil Postman's 1985 book Amusing Ourselves to Death. In it, Postman predicts that the near future will be a Huxleyan rather than Orwellian nightmare: a world in which would-be tyrants have no need to ban books, since people — numbed by too much information and zonked out on cheap entertainment — no longer read anyway.
It is, indeed, prescient. But it also got me thinking: why do we keep going back to Postman? Indeed, why do we keep going back to Alasdair MacIntyre, René Girard, Leszek Kołakowski, Hannah Arendt, Czesław Miłosz, Christopher Lasch, Zygmunt Bauman, and all the other heavyweights of the second half of the twentieth century? Have we really not come up with any fresh ideas since?
Sadly, I don't think we have. There's plenty of good writing, of course. But when it comes to the biggest questions — technology, multiculturalism, the decline of religion, the waning of democracy — it does not seem as if the twenty-first century has offered any insights that weren't better articulated decades ago. When Alasdair MacIntyre died a couple of months back, commentators rushed to agree that his 1981 book After Virtue gave us the authoritative account of the morally confused world we live in. This wasn't just opportunism. After Virtue, in my opinion, really did get its diagnosis of modern ethics spot on! I've made the same unoriginal point myself. Similarly, I often feel, when I sit down to write, that the most fruitful thing I could do would be just to compose a single sentence: "Go pick up a book of Kołakowski's essays".
As I see it, the problem is largely historical. With most of the big issues we face, what we're really discussing is, at root, our underlying worldview — essentially, secular materialism — refracted through the smaller, immediate prisms of the moment. The seeds of that underlying worldview were planted in the seventeenth century, blossomed towards the end of the nineteenth, and were harvested — often with grave consequences — in the twentieth. In other words, the ramifications of our new intellectual settlement were clear to many by the midpoint of the last century (and to some, much earlier still), leading, in the second half, to a furious period of both creativity and criticism. There is, for the most part, now precious little to add.
Take artificial intelligence — on the face of it, the most radically new development of our time. Fantasies about artificial intelligence go back, in some form, at least as far as the Greek myths. But in its modern form, the story begins in the seventeenth century, with Descartes. Descartes dreamed of making all human knowledge as precise and as certain as maths. His ambition was methodological: true knowledge, he thought, could only be achieved if we restricted ourselves to reasoning, like mathematicians, in a step-by-step manner, from certain premise to certain premise. He wrote:
Those long chains, composed of very simple and easy reasonings, which geometers customarily use to arrive at their most difficult demonstrations, had given me occasion to suppose that all the things which come within the scope of human knowledge are interconnected in the same way. And I thought that provided that we refrain from accepting as true anything which is not, and always keep to the order required for deducing one thing from another, there can be nothing too remote to be reached in the end or too well hidden to be discovered.
Our picture of reality was to be constructed like a jigsaw puzzle: starting with a single piece, we would add only immediately adjacent pieces, one by one, checking each time that we had chosen the right one, until we had fleshed out the full, glorious image of the universe.
This dream was only possible, however, if the world, and the mind comprehending it, really could be reduced to simple building blocks. This, obviously, aligned neatly with the growing bias at the time towards atomistic theories of the physical world. But philosophers began to dream of analysing human thought, too, down to granular, discrete chunks. Leibniz spoke of an "alphabet of human thoughts". Hume tried to explain sense perception as the cumulative result of millions of isolable atoms of experience. Hobbes wrote: "By reasoning I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract".
By the nineteenth century, the logician George Boole spoke confidently of a "mathematics of the human intellect". Charles Babbage, meanwhile, invented the first mechanical computer, the "difference engine". These two lines naturally converged. At the 1956 Dartmouth Conference at which the term "artificial intelligence" was first actually coined, the organisers claimed that human "intelligence can in principle be so precisely described that a machine can be made to simulate it".
All to say: the central arguments for artificial intelligence, and therefore the key arguments against it, were well-established by about 1960. Alan Turing, Marvin Minsky, John McCarthy, and many others had already by then laid out sophisticated (if, in my opinion, wrong) philosophical arguments for why a computer ought to be, in principle, capable of replicating the human mind. In 1965, the RAND corporation invited the philosopher Hubert Dreyfus to produce a counter-case, which resulted in his paper "Alchemy and Artificial Intelligence", later expanded into a book, What Computers Can't Do, published in 1972.
Dreyfus argues that the dream of humanlike AI rests, ultimately, on a set of faulty philosophical assumptions: i.e., that the world can be reduced to atomic physical facts, that human reasoning can be reduced to a set of explicit rules, and that both can be in principle described perfectly precisely. These assumptions are, of course, really just those that emerged in the seventeenth century. They are the same assumptions that Blaise Pascal challenged, at the very time, in his Pensées, when he distinguished between the esprit de géométrie (mathematical thought) and esprit de finesse (intuitive or perceptive thought): "Mathematicians wish to treat matters of perception mathematically, and make themselves ridiculous.… the mind… does it tacitly, naturally, and without technical rules."
You could argue that it took until the twentieth century for the full force of these arguments — on both sides — to land. Turing, Minsky, and Shannon understood and explored the logical implications of our underlying worldview with astounding clarity and inventiveness. Husserl, Heidegger, Wittgenstein, and other critics of reductionism understood, in turn, the limitations of such a worldview. What is harder to argue is that, for all the surface developments in AI, anybody has added much to the arguments since (sorry Yuval). Even the wackier strands of technological utopianism, like transhumanism, go back to at least the 1970s and 80s — just look up FM-2030.
It's possible that all of this is about to change, and that genuinely new ideas are around the corner. But we do seem to have stumbled into an unusually inert moment in history when universal truths about humanity — that we don't really know what to do without religion, that we have an unquenchable desire to abstract ourselves away from experience, that we hubristically believe ourselves to be fully capable of dominating nature — have become painfully clear. I am reminded, not for the first time, of Kołakowski's brilliant essay "Modernity on Endless Trial", published in 1986:
We experience an overwhelming and at the same time humiliating feeling of déjà vu in following and participating in contemporary discussions about the destructive effects of the so-called secularisation of Western civilisation, the apparently progressive evaporation of our religious legacy, and the sad spectacle of a godless world. It appears as if we suddenly woke up to perceive things which the humble, and not necessarily highly educated, priests have been seeing — and warning us about — for three centuries and which they have repeatedly denounced in their Sunday sermons. They kept telling their flocks that a world that has forgotten God has forgotten the very distinction between good and evil and has made human life meaningless, sunk into nihilism. Now, proudly stuffed with our sociological, historical, anthropological and philosophical knowledge, we discover the same simple wisdom, which we try to express in a slightly more sophisticated idiom.
Perhaps, to sound a little Whiggish, we really have reached a historic peak of self-awareness — self-awareness about our perennial flaws, at least — and now, on that front, there just isn't that much more to add. If so, the question is what, if anything, we can usefully do with that knowledge.