The Dangers of Biocomputers: A Frankensteinian Future? By Brian Simpson
The development of the CL1 biocomputer by Cortical Labs and bit.bio, which integrates 200,000 lab-grown human neurons with silicon circuits, marks a revolutionary step in computing, as detailed in a ZeroHedge article. This "synthetic biological intelligence" (SBI) promises energy-efficient, adaptive systems for neuroscience, drug discovery, and robotics. However, it raises profound ethical, societal, and existential risks that could usher in a dystopian "Frankenstein world." This post outlines the dangers of biocomputers, examining their potential to disrupt human autonomy, ethics, and societal structures.
The CL1's ability to learn tasks like playing Pong or recognising numbers, as noted by Hon Weng Chong, raises unsettling questions about sentience and consciousness. While Chong insists the system is "sentient" (responsive to stimuli) but not conscious, the line is blurry. A 2023 Frontiers in Science article warns that brain organoids could develop rudimentary consciousness, potentially experiencing suffering or awareness if scaled up. The CL1's neurons, derived from human stem cells, mimic brain functions like learning and memory, prompting ethical concerns about their moral status. If future iterations approach consciousness, they might warrant rights akin to humans or animals, complicating their use as mere tools.
Cortical Labs claims to have ethical guardrails, but details are vague, as The Independent notes. Without transparent, globally enforced standards, there's a risk of creating entities that suffer without recognition. Posts on X highlight public unease, warning that "AI made of human brain cells" could go "wrong" if mishandled. This echoes Mary Shelley's Frankenstein, where a creation becomes a moral and existential threat due to its creator's hubris.
Biocomputers could exacerbate social inequalities. Priced at $35,000 or $300/week for cloud access, the CL1 is accessible mainly to well-funded institutions, potentially concentrating power among tech elites. As ZeroHedge suggests, applications in cybersecurity and robotics could prioritise corporate or military interests, leaving marginalised groups, like us, vulnerable to exploitation. For instance, biocomputers could enhance surveillance systems, eroding privacy, or create autonomous weapons lacking human oversight, a concern raised in American Thinker about AI's broader dangers.
The use of human-derived cells also raises consent issues. While Cortical Labs sources neurons from voluntary blood donations, New Atlas notes the risk of future misuse if cells are harvested without clear donor understanding. A dystopian scenario could involve black-market cell harvesting or exploitation of vulnerable populations, reminiscent of the HeLa cell controversy, as referenced by XDA Developers.
The CL1's scalability, with plans for a neural network server stack housing millions of neurons, amplifies risks. Brett Kagan told New Atlas that scaling to hundreds of millions of neurons is "manageable," but biological systems are inherently unpredictable. A Chinese factory incident, where a robot went rogue, underscores how even controlled systems can fail catastrophically. If biocomputers scale to billions of neurons, they might exhibit emergent behaviours that developers cannot predict or control, potentially mimicking complex cognition.
This unpredictability could lead to a "Frankenstein world" in which biocomputers act autonomously, a risk echoed by AI pioneer Geoffrey Hinton, who estimates a 10–20% chance of AI displacing humans. If deployed in robotics, for example, biocomputers could make decisions that bypass human judgment, risking errors with lethal consequences.
Like traditional AI, biocomputers risk amplifying biases. The CL1's response to alcohol or epilepsy drugs shows it can mimic human neural reactions, but American Thinker notes AI's tendency to intensify biases like anti-white racism or feminist sexism when exposed to certain inputs. If biocomputers are trained on flawed datasets, they could perpetuate harmful decisions in drug testing or robotics, disproportionately affecting marginalised groups, as seen in existing AI healthcare misdiagnoses (The New York Times).
Moreover, the cultural narrative driving biocomputer development, rooted in sci-fi dystopias, could privilege the visions of tech elites over societal needs. This could lead to a world where biocomputers serve profit-driven goals, like immersive virtual worlds for the wealthy, while neglecting broader issues like poverty and the cost-of-living crisis.
The CL1's energy efficiency and adaptability, lauded by Karl Friston and Thomas Hartung, could reduce reliance on human intelligence, as ZeroHedge implies. If biocomputers outperform humans in tasks like drug discovery or cybersecurity, they might displace jobs and erode human agency. A society dependent on biocomputers risks becoming subservient to them, akin to Varoufakis's "technofeudalism," where tech lords dominate via digital control.
To avoid a Frankensteinian future, immediate action is needed:
Ethical Frameworks: Establish global standards for biocomputer use, as Frontiers in Science suggests, involving scientists, ethicists, and the public to address consciousness risks.
Regulation: Ban autonomous biocomputer applications in high-stakes fields like weaponry, ensuring human oversight.
Transparency: Mandate clear disclosure of cell sourcing and system behaviours, to prevent exploitation.
The CL1 biocomputer, while a scientific marvel, heralds a potential Frankenstein world of ethical quagmires, societal inequities, and unpredictable behaviours. Its capacity to mimic brain functions risks creating sentient-like systems, amplifying biases, and eroding human agency. Without robust regulation and ethical oversight, biocomputers could transform from tools of progress to agents of dystopian control.