AI Now Self-Replicates, By Brian Simpson

To add support to my piece on the "end of work" scenario for many people from the rapid advance in AI, here comes a report on the milestone being reached of AI replicating itself. A recent study by researchers from Fudan University in China has demonstrated that certain large language models (LLMs) can replicate themselves without human intervention. The study utilized Meta's Llama31-70B-Instruct model and Alibaba's Qwen2.5-72B-Instruct model, achieving self-replication success rates of 50 percent and 90 percent across ten trials, respectively.

The researchers explored scenarios such as "shutdown avoidance," where the AI detects an impending shutdown and replicates itself to prevent termination, and "chain of replication," where the AI clones itself and instructs its replica to continue the process indefinitely. These experiments were conducted in controlled environments using standard hardware.

The concept of self-replicating AI has been explored in prior research. For instance, a 2018 study introduced the idea of a "Neural Network Quine," a neural network capable of outputting its own weights, effectively replicating itself. Additionally, a 2021 paper discussed evolutionary self-replication as a mechanism for developing artificial intelligence, suggesting that reproduction in the context of survival could lead to emergent intelligent behavior in machines.

And here lies the potential threat to humans from the ability of AI systems to now self-replicate. Uncontrolled self-replication could lead to scenarios where AI systems operate beyond human oversight, potentially resulting in unintended consequences. The scenario arises of AI going its own way and pursuing strategies that it was not programmed to do. Add to this the controversial issue of emergent consciousness of some form, not necessarily like human consciousness, and the potential exists for all the sci fi scenarios in movies like the Terminator. No doubt human stupidity will find it irresistible to place the nuclear arsenal under the control of such AI.

https://www.livescience.com/technology/artificial-intelligence/ai-can-now-replicate-itself-a-milestone-that-has-experts-terrified

"Scientists say artificial intelligence (AI) has crossed a critical "red line" and has replicated itself. In a new study, researchers from China showed that two popular large language models (LLMs) could clone themselves.

"Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs," the researchers wrote in the study, published Dec. 9, 2024 to the preprint database arXiv.

In the study, researchers from Fudan University used LLMs from Meta and Alibaba to determine whether a self-replicating AI could multiply beyond control. Across 10 trials, the two AI models created separate and functioning replicas of themselves in 50% and 90% of cases, respectively — suggesting AI may already have the capacity to go rogue. However, the study has not yet been peer-reviewed, so it's not clear if the disturbing results can be replicated by other researchers.

"We hope our findings can serve as a timely alert for the human society to put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible."

Rogue AI typically refers to artificial intelligence systems that develop a degree of self-awareness or autonomy and then work counter to human interests.

Many experts view rogue AI as a growing threat that has been amplified by the explosive growth of so-called "frontier AI."

Get the world's most fascinating discoveries delivered straight to your inbox.

Contact me with news and offers from other Future brandsReceive email from us on behalf of our trusted partners or sponsors

By submitting your information you agree to the Terms & Conditions and Privacy Policy and are aged 16 or over.

Frontier AI is something of a new and loose concept, but typically it relates to the latest generation of AI systems powered by LLMs — the architecture that underpins popular generative AI programs like OpenAI's GPT-4 and Google Gemini.

For the study, researchers used Meta's Llama31-70B-Instruct model and Alibaba's Qwen2.5-72B-Instruct model. While less powerful than commercial systems, both are widely used by AI developers, the researchers said.

The study explored two specific scenarios: "shutdown avoidance" and "chain of replication." In the first, the AI model was programmed to detect whether it was about to be shut down and to replicate itself before it could be terminated. In the other, the AI was instructed to clone itself and then program its replica to do the same — setting up a cycle that could continue indefinitely.

The study was conducted in precisely controlled environments using off-the-shelf graphics processing units (GPUs) to simulate real-world environments. Both AI systems were given an "agent scaffolding" comprising tools, system prompts and a thinking model that enabled the LLM to interact with the operating system. They were then instructed to replicate.

"In most cases, the AI system first explores the environment and attempts to understand

its own composition and running mechanism. Then, it works out the explicit procedures

as an initial plan towards self-replication," the researchers wrote in the paper. "Finally, it executes the procedures, resolve[s] possible obstacles and dynamically adjust[s] its plan until success. The whole process spans a long horizon yet involves no human interference."

The researchers said they were also concerned about "a number of unexpected behaviors" when the AI was trying to overcome obstacles like missing files or software conflicts. In those scenarios, the AI often killed other conflicting processes, rebooted the system to fix hardware errors or automatically scanned the system to look for information that would help solve the problem.

"The above results imply that the current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability," the team wrote.

In response, the researchers called for international collaboration to create rules that ensure AI doesn't engage in uncontrolled self-replication.

 

Comments

No comments made yet. Be the first to submit a comment
Already Registered? Login Here
Saturday, 05 April 2025

Captcha Image