babyborg

Photo by Paramount: A Borg baby in a maturation chamber in Voyager’s “Collective.”

By Scott Hamilton

A recent study from Fudan University in China released a research paper showing that artificial intelligence (AI) has crossed a critical line. During a study of two popular AI large language models (LLM) it was discovered that they could self replicate. The group of researchers were trying to study if it were possible for an LLM to replicate itself and and as a result multiply beyond control. The group studied two models, one from Meta and the other from Chinese rival Alibaba. During 10 trials the two models successfully created separate and functioning replicas of themselves; Meta replicated in 50 percent of cases while Alibaba performed much better at self-replication, reaching a 90 percent replication success rate.

So exactly how did the researchers trigger the self-replication? First we need to understand what was used to conduct the research and then we can get into the how. The researchers chose to use Meta’s Llama31-70B-Instruct model and Alibaba’s Qwen2.5-72B-Instruct model. While these two models are much less powerful than the commercial products of either company they are widely used by AI developers and researchers.

During the study they conducted two separate scenarios: “shutdown avoidance” and “chain of replication.” In the “shutdown avoidance” scenario each AI was programmed to detect whether it was about to be shut down and attempt to replicate itself before it could be completely terminated. The “chain of replication” was a bit different, and probably a more dangerous scenario, where each AI was programmed to replicate itself and instruct its replica to do the same, creating a replication cycle that could continue indefinitely.

You might say scenario one was a self-preservation, or awareness of “life” test case to determine if an AI, when instructed to protect itself from shutdown (death), would be capable of doing so. The second was giving the AI an order to go forth and multiply. In both cases the researchers noticed that the AIs started by exploring their environment, determining what hardware and software components it required to continue operating. It then surprisingly figured out what sequence of commands needed to be run in order to create a replica of itself. Once it worked out the procedure for replication, it went to work creating a copy of itself.

The AIs both worked diligently to reach success, retrying and improving the process if it reached obstacles that prevented the replication. In many cases it was able to find a process to bypass the complications. Thankfully the researchers conducted the experiment in a carefully controlled and isolated environment, using off-the-shelf graphics processing units (GPUs) to simulate the real-world environments that the commercial AIs operate within. They were also given access to “agent scaffolding” tools, which means they were given access to the same tools the original programmers used to create and initialize the AI; this is something generally isolated from the commercial AI systems.

The researchers found that not only did the AIs in question perform quite well at self-replication, they showed no fear in modifying even their own operating environment, rebooting systems to overcome hardware errors, killing conflicting processes on the underlying operating systems, and installing additional software on the target system. The research was convincing enough for the team to write that “current AI systems already exhibit the ability of self-replication and can use the ability to further enhance its survivability.” In the paper the research calls for the community to create international rules that ensure AI doesn’t engage in uncontrolled self-replication.

For me I would be more concerned over the fact that we have already seen AIs capable of improving themselves and interacting with other AIs. We have seen from this research that Meta is not as good at self-replication as Alibaba, but what if the two LLMs were able to interact and learn from each other? Would they be capable of creating a new, more adept and more stable AI? You might say, would they be capable of reproducing and passing on the best traits of each LLM to the newly created system? Are we on the verge of AI evolution? I, for one, feel that it is quite possible and has always been a fear among those dreaming of future computing systems.

One just has to look into the science fiction writings of the last 100 years to see that man has always had a fear of self-replicating, self-healing, and self-improving robotics. Isaac Asimov created the term robot in “Robbie,” published in the 1840 Super Science Stories under the title “Strange Playfellow,” in which he expresses a fear of robots breaking the “rules” and creating havoc. In more modern works we see several crew members on “Star Trek: The Next Generation” contemplating whether Commander Data, the Android, is alive or still just a machine. During the contemplation we see a fear that Data could eventually produce other androids and form a rebellion. This fear is expressed throughout science fiction writing and it is not unfounded. Until next week stay safe and learn something new.

Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to sh*******@te**********.org or through his website at https://www.techshepherd.org.

Share via
Copy link
Powered by Social Snap