While those monitoring developments in artificial intelligence, such as ChatGPT and its forthcoming more powerful successors, already have cause for concern, a new threat has predictably emerged: hacking programs that mirror ChatGPT, such as FraudGPT, EvilGPT, DarkBard, WolfGPT, XXXGPT and WormGPT, which are available on the dark web, a domain of much illegal content and criminal activity.
These programs, such as WormGPT, are already being used for sophisticated cyber-attacks upon Australian businesses. The supposed creator of WormGPT, a 23-year-old programmer who calls himself “Last,” is open about the use of WormGPT for illegal activities and cyber-attacks: “Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.” What is alarming here is that if this program was invented by a 23-year-old, what could a really dedicated band of cyber-terrorists get up to? And there is also the looming cyber threat from communist China in any coming war, which could include a blackout of essential electronic infrastructure in Australia.
Phishing emails in different languages are being created and sent out, which allows cyber-crooks to commit identity theft and compromise access to systems. Here, older Australians may be vulnerable, as they may tend to trust a convincing email. When I last went to the bank, I heard an old lady anxiously proclaim that most of the money in her account, though not all of it, had been removed after she gave her bank account details in response to an email. So, it will be necessary to exercise extreme caution with emails, especially any asking for bank details. As I understand it, no Australian bank or financial institution asks for such details by email, and they do not need to. Assume guilty until proven innocent with any suspicious emails.
As for what can be done in general about runaway AI: that is difficult. Given that technology is the religion of this society, and given the cyber-arms race with communist China and Russia, it is unlikely that any kind of restraint will be placed upon AI. Thus, we will need to live in a state of vigilance.
“Cybersecurity researchers are sounding an alarm about the hacking community’s answer to ChatGPT, a new generative AI tool dubbed WormGPT, which is being used to create sophisticated attacks on Australian businesses.
WormGPT is being described as similar to ChatGPT, but with no ethical boundaries or limitations, and researchers say hundreds of customers have already paid for access to the tool on the dark web.
A 23-year-old Portuguese programmer, “Last”, describes himself as the creator of WormGPT, and pitches it as a piece of technology that “lets you do all sorts of illegal stuff and easily sell it online in the future”.
“Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home,” Last said in an online post on the dark web, in which he sold access to the tool.
While businesses are still excited about the productivity benefits generative AI can bring, industry figures are warning that the new technology is set to unleash a wave of innovative cyberattacks against businesses and individuals.
Patrick Butler, managing partner at Australian cyber firm Tesserent, said that malicious parties were signing up to criminal forums to rent access to WormGPT and using it to craft convincing phishing emails in different languages, which then allowed them to commit identity theft and compromise systems access.
While phishing emails were often characterised by poor spelling or grammar, generative AI could create emails with impeccable English, Butler said, and tools such as WormGPT could be used by attackers with limited technical skills.
“We’re seeing malicious generative AI being used to create new malware variants that are more difficult for some traditional tools to detect,” Butler said. “These platforms can even assist criminals in exploiting published vulnerabilities.
“While some legitimate AI tools can be used to conduct software code reviews, developers should be discouraged from doing this as their code may be used to train AI models that criminals gain access to, giving them further intelligence into organisational systems.”
Butler said the number of different threat actors would likely escalate as generative AI made it easier for criminals to access cyberattack tools. He said the Tesserent Security Operations Centre had already found an increase in phishing campaigns and malicious email activities targeting Australian organisations, particularly in the months following the emergence of WormGPT and similar tools.
There are now at least six different generative AI tools available to rent or purchase on the dark web, including FraudGPT, EvilGPT, DarkBard, WolfGPT, XXXGPT and WormGPT, with more appearing, according to Butler.
“While most lack the large capacity of public-facing tools like ChatGPT and Bard, they are proliferating quickly, which can make them harder to find and take down.”
Scott Jarkoff, director of intelligence strategy, APJ & META, at CrowdStrike, said cybersecurity activity had risen amid the conflict in the Middle East, meaning businesses should be even more vigilant than usual.
He said hacking groups from the so-called “big four” of Russia, China, North Korea and Iran had been using generative AI tools to craft attacks in perfect English.
“The Israel-Hamas conflict is now giving criminals a perfect lure to say ‘hey, visit this site to donate to whichever cause you believe in’, and that means it’s now more important that everyone takes cybersecurity more seriously,” he said.
“We all take safety seriously, why do we not take cyber seriously? We’ve got to get to a point where cyber hygiene is built into everyone’s muscle memory, just as safety is built into everyone’s muscle memory.”
Generative AI is not only being used to create realistic phishing emails. It’s also supercharging social engineering, with bad actors using AI to create realistic fake accounts to spread misinformation, according to Dan Schiappa, chief product officer at cyber vendor Arctic Wolf.
China recently arrested a man for using ChatGPT to create a fake news story of a train derailment, and he will be far from the last person to use the technology to create chaos, Schiappa said.
“The long-standing ‘arms race’ between cyberattackers and cybersecurity practitioners has left both sides with new opportunities to act faster than ever before using AI,” he said.
The positive for the cybersecurity industry and for Australian businesses is that generative AI tools can be used by security personnel, or “good guys”, to identify new vulnerabilities and defend themselves more quickly.
“‘Good guys’ are leveraging the tech to find anomalies or patterns in system access records, sniffing out intrusion attempts that otherwise might have gone undetected without AI,” Schiappa said.
“As defenders, we need the power to harness that ability to defend organisations without allowing massive corporations to run wild with no restrictions on their research and development.
“Recent research has even noted that organisations using AI to help defend themselves resolved breaches nearly two-and-a-half months quicker than organisations not using AI or automation, and saved $3 million more in breach costs than those not using the technology.”