By John Wayne on Wednesday, 11 March 2026
Category: Race, Culture, Nation

Doom in a Petri Dish: When Technology Forgets Its Boundaries, By Brian Simpson

In a development that sounds like it belongs in an episode of Black Mirror, an Australian biotech firm, Cortical Labs, has reportedly trained clusters of human brain cells grown in a laboratory to interact with the classic video game Doom.

At first glance this may sound like little more than a bizarre scientific curiosity. But the experiment sits at the frontier of what researchers call biological computing, and it raises questions that go far beyond video games or laboratory novelty. The deeper issue is whether our technological culture is once again racing ahead of its ethical instincts.

The breakthrough builds on earlier experiments in which similar clusters of neurons—grown in laboratory dishes and sometimes described as "mini-brains"—were taught to interact with the arcade game Pong. In the new work, researchers exposed roughly 800,000 to one million living human neurons to electrical signals from a computer interface. Over time the neural cluster began responding in ways that allowed it to influence actions inside the game environment.

Technically speaking, the experiment is impressive. It demonstrates that living neural tissue can adapt to external stimuli in ways that resemble learning. Some scientists see this as an early step toward a new form of computing that blends biological neurons with silicon hardware. In theory, such systems could eventually become extremely energy-efficient learning machines.
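The closed-loop arrangement described above—stimulate the culture, read its response, and feed the result back—can be illustrated with a purely software toy. To be clear, nothing below reflects Cortical Labs' actual system or any published interface; ToyCulture, its weight-nudging rule, and closed_loop are hypothetical stand-ins, sketched only to make the feedback idea concrete.

```python
import random

class ToyCulture:
    # Hypothetical stand-in for a lab-grown neural culture: it keeps a
    # preference weight per available action and nudges those weights in
    # response to feedback, loosely mimicking stimulus-driven adaptation.
    def __init__(self, actions, learning_rate=0.2, seed=42):
        self.rng = random.Random(seed)
        self.weights = {a: 1.0 for a in actions}
        self.lr = learning_rate

    def respond(self):
        # Sample an action in proportion to current preference weights,
        # standing in for the culture's electrical response to a stimulus.
        actions = list(self.weights)
        return self.rng.choices(actions,
                                weights=[self.weights[a] for a in actions])[0]

    def feedback(self, action, success):
        # Reinforce a response that succeeded; weaken one that failed
        # (with a small floor so no action vanishes entirely).
        if success:
            self.weights[action] += self.lr
        else:
            self.weights[action] = max(0.05, self.weights[action] - self.lr)

def closed_loop(target, steps=500):
    # One full loop: read the culture's output, evaluate it against the
    # game state, and stimulate accordingly. Repeated many times, the
    # culture's behaviour drifts toward the rewarded action.
    culture = ToyCulture(actions=["up", "down", "stay"])
    hits = 0
    for _ in range(steps):
        action = culture.respond()
        success = (action == target)
        culture.feedback(action, success)
        hits += success
    return culture, hits

culture, hits = closed_loop("up")
```

After a few hundred iterations the weight on the rewarded action dominates, so the toy "culture" ends up producing it almost every time—a crude software analogue of the adaptation the researchers report in living tissue.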

But technological capability is not the same thing as wisdom.

The modern research establishment has developed a habit of presenting every frontier experiment as an inevitable step forward. Questioning the direction of technological progress is often dismissed as backward-looking or anti-science. Yet history repeatedly shows that societies that fail to think carefully about the moral implications of new technologies often regret that failure later.

The uneasy feeling surrounding these experiments comes from the fact that they blur a boundary most people instinctively believe should exist: the boundary between tools and living human tissue.

The neurons used in these experiments are derived from human cells. Scientists emphasise that the clusters are extremely simple and lack the structure necessary for anything resembling consciousness. But even if that is true today, the trajectory of the research raises obvious questions about where it might lead tomorrow.

If a cluster of neurons could learn to play Pong yesterday and can interact with Doom today, what happens when researchers begin scaling up the complexity of these biological systems?

At what point does a laboratory neural culture stop being a mere biological component and start resembling something closer to a primitive nervous system?

These questions are not the fantasies of science fiction writers. They are the logical consequences of a research field whose explicit goal is to merge biological intelligence with machine systems.

For technological enthusiasts, the answer is simple: continue advancing until the limits are discovered. But this mindset reflects a particular cultural attitude toward technology — one that treats progress as an end in itself.

From a more conservative perspective, the purpose of technological development is not simply to push boundaries but to improve human flourishing while preserving human dignity. That requires restraint as well as ingenuity.

Modern societies already struggle with technologies that reshape human life faster than social institutions can adapt. Social media, algorithmic surveillance, and artificial intelligence have all produced consequences that few anticipated when they were first introduced.

Biological computing raises even deeper questions because it touches on the nature of human identity itself. Human beings are not merely biological components to be rearranged into useful machines. A civilisation that begins treating human tissue as a raw material for computational systems risks gradually eroding the very concept of human dignity.

There is also the cultural dimension. For centuries Western civilisation emphasised ideals of individual agency, responsibility, and moral autonomy. These values depend on recognising the uniqueness of human beings as moral actors rather than simply biological mechanisms.

Technologies that treat fragments of the human brain as programmable computing units subtly push in the opposite direction. They reinforce a worldview in which human beings are ultimately reducible to networks of neurons that can be engineered, manipulated, and optimised like any other technical system.

This perspective may appeal to certain strands of modern technocratic thinking, which tends to view society itself as an engineering problem. But it sits uneasily with older traditions that place moral limits on what science should attempt to do.

None of this means research should stop entirely. Curiosity and experimentation are part of the scientific enterprise. But there is a difference between exploring nature and remaking it without serious reflection.

Experiments like those conducted by Cortical Labs may eventually lead to valuable insights about neuroscience or computing. Yet they also illustrate a deeper tendency within modern technological culture: the assumption that if something can be done, it eventually will be done.

That assumption is not a law of nature. It is a choice.

And societies that fail to make thoughtful choices about technology often discover that the most important decisions were made long before anyone realised they were being made.

Teaching neurons in a petri dish to play Doom may sound like a clever scientific stunt. But it also serves as a reminder that technological progress is never purely technical.

It is always, whether we acknowledge it or not, a moral question as well.