China's military brain trust has sounded the alarm on its own sci-fi fever dream: humanoid robots designed to replace soldiers on the battlefield. According to an op-ed in the People's Liberation Army Daily, these mechanised warriors could go full Terminator, unleashing "indiscriminate killings and accidental death" if not reined in by some serious ethical and legal guardrails. Because nothing screams "don't worry, we've got this" like a Communist superpower fretting over its own killer robots.

The authors, Yuan Yi, Ma Ye, and Yue Shiguang, point out that these militarised humanoids are already flipping the bird to Isaac Asimov's First Law of Robotics: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Apparently, programming a robot to wield an assault rifle doesn't exactly scream "respecting humans." The trio's solution? Overhaul Asimov's laws, because evidently, the rules penned by a sci-fi writer in 1942 aren't quite cutting it for 2025's dystopian battlefield.

Let's pause to appreciate the irony. China, a nation not exactly known for fretting over moral nuances, is now clutching its pearls over the prospect of rogue robots mowing down friend and foe alike. The PLA Daily warns that these machines, if not tightly leashed, could violate the laws of war by failing to "obey humans," "respect humans," or "protect humans." Translation: they're worried their shiny new toys might not salute the right flag before opening fire. And yet, the same article admits that these robots lack the speed, dexterity, and terrain-navigating chops to fully replace human soldiers or even other unmanned systems. So, we're building killing machines that might go haywire but aren't even that good at it? Bold strategy!

Meanwhile, across the Pacific, the U.S. Army is diving headfirst into its own robot romance. Scientists at the Army Research Laboratory are working on "interactive bi-directional communication systems" to make robots "more intuitive, responsive, and, ultimately, more useful for the Soldier." Because what's better than a robot that can kill? A robot that can kill and take orders in real-time. The U.S. is banking on human-machine collaboration, envisioning a buddy-cop dynamic where soldiers and robots trade witty banter while dodging bullets. But let's be real: when your wingman is a walking algorithm with a grenade launcher, "trust issues" take on a whole new meaning.

The Chinese op-ed's call for "ethical and legal research" to avoid "moral pitfalls" is almost quaint in its optimism. Picture a room full of bureaucrats debating whether a robot should say "sorry" before it accidentally vaporises a village. The authors insist robots need constraints to "suspend and limit excessive use of force," but anyone who's ever dealt with a buggy software update knows that "constraints" are only as good as the code behind them. One misplaced semicolon, and your friendly neighbourhood robot could turn a peacekeeping mission into a scene from an apocalypse movie.

And then there's the bigger question: what happens when these robots get hacked? China's already exporting dirt-cheap robot dogs with cameras and microphones for as little as $540 a pop. If a consumer-grade robodog can be turned into a spy, imagine what a hostile actor could do with a militarised humanoid packing guns. A "sleeper army" of compromised robots could wreak havoc, and the PLA's own warnings suggest they're not entirely confident in their cybersecurity. After all, if your robot army can be hijacked by a teenager with a laptop, you've got bigger problems than Asimov's outdated rulebook.

The U.S. isn't exactly sitting pretty either. Posts on X highlight the global race to arm AI, with China's PLA openly discussing humanoid robots as a "new frontier" in warfare. Meanwhile, retired General Mark Milley has predicted that up to a third of the U.S. military could be robotic within a decade. That's right: in 2035, your friendly neighbourhood drill sergeant might be a Roomba with a rocket launcher. And while the U.S. talks up "human-machine collaboration," China's vision is more… let's say, autonomous. Their goal of "intelligentisation" involves AI clusters running the show, potentially sidelining human commanders altogether. Because nothing says "strategic victory" like letting a neural network call the shots.

So, what could go wrong? Oh, just a few minor hiccups: robots misinterpreting orders, robots getting hacked, robots deciding "indiscriminate" is just a fancy word for "efficient." The PLA's own newspaper admits that these machines could lead to "legal charges and moral condemnation," which is a polite way of saying "war crimes, but with extra circuits." And yet, both superpowers are charging full speed into this brave new world, because nothing screams "progress" like building a robot that can accidentally start World War III.

In the end, the race for Terminator troops is less about who wins and more about who doesn't lose everything. China's calling for ethical oversight while simultaneously building the very machines it's worried about. The U.S. is preaching collaboration but prepping for a future where robots outnumber recruits. And somewhere, Isaac Asimov is shaking his head, muttering, "I told you so." Maybe it's time to dust off those old sci-fi novels, not for inspiration, but for a warning. Because when your robot army starts dreaming of electric slaughter, the only thing left to do is hope they don't wake up.

https://www.zerohedge.com/military/china-warns-rogue-robot-troops-unleashing-terminator-style-indiscriminate-killings

"Concerns are mounting in China as the Communist superpower advances humanoid robot development to replace human soldiers on the battlefield, prompting calls for "ethical and legal research" into this Terminator-like technology to "avoid moral pitfalls."

An op-ed published by Yuan Yi, Ma Ye and Yue Shiguang in the People's Liberation Army (PLA) Daily warned that faulty robots could lead to "indiscriminate killings and accidental death," which would "inevitably result in legal charges and moral condemnation."

The South China Morning Post reports:

The authors said that militarised humanoid robots "clearly violate" the first of Asimov's laws, which states that a robot "may not injure a human being or, through inaction, allow a human being to come to harm". They added that Asimov's laws needed to be overhauled in the light of these developments.

They also highlighted legal implications, saying that humanoid robots in military scenarios should comply with the main principles of the laws of war by "obeying humans", "respecting humans" and "protecting humans".

The authors emphasised that robots must be designed with constraints to "suspend and limit excessive use of force in a timely manner and not indiscriminately kill people." Additionally, the trio cautioned against hastily replacing humans with robots, noting that robots still lack essential capabilities such as speed, dexterity, and the ability to navigate complex terrains.

"Even if humanoid robots become mature and widely used in the future, they will not completely replace other unmanned systems," the article said.

Concurrently, the U.S. Army is intensifying efforts to integrate robotics, artificial intelligence, and autonomous systems, aiming to enhance human-machine collaboration between soldiers and advanced robots on the battlefield, according to Interesting Engineering.

Scientists at the U.S. Army Combat Capabilities Development Command Army Research Laboratory (DEVCOM ARL) are pioneering advancements in ground and aerial autonomous systems, as well as energy solutions, to bolster the mobility and maneuverability of these technologies, the technology website reports.

"We are bridging the gap between humans and robots, making them more intuitive, responsive, and, ultimately, more useful for the Soldier," said a lead researcher for the Artificial Intelligence for Maneuver and Mobility program. "ARL researchers have demonstrated an interactive bi-directional communication system that enables real-time exchanges between humans and robots."