The arrival of Anthropic's Claude Mythos has produced the now-familiar mixture of technological wonder, regulatory panic and corporate self-advertisement. The claim is stark: here is an AI system so capable at cybersecurity that it cannot safely be released to the public. It can inspect code, find weaknesses, reason through exploitation chains, and perhaps uncover zero-day vulnerabilities — the unknown flaws that defenders have not yet patched because they do not yet know they exist. If true, this is not just another chatbot upgrade. It is a shift in the economics of hacking.
The danger is not that Mythos invents evil. The danger is that it compresses time. What once required a skilled human team, days of work and specialist knowledge may become semi-automated. The ordinary cybercriminal may not suddenly become a genius, but they may rent access to something that behaves enough like one. The real threat is therefore not cinematic artificial superintelligence taking over the internet. It is industrialised vulnerability discovery. Cybersecurity has always depended partly on friction: attacks take time, skill and patience. Mythos-style systems threaten to remove that friction.
That matters because the modern internet is already fragile. Hospitals, banks, universities, logistics firms, energy grids and government agencies run on sprawling, patched, legacy software. Much of it is badly maintained. Even today, many successful cyberattacks do not require exotic zero-days. They exploit known flaws that organisations failed to patch, stolen credentials, misconfigured cloud systems, weak vendor links and human error. This is why the most dramatic claims about Mythos should be treated carefully. If the world is already full of open doors, the burglar does not always need a laser cutter.
Still, the zero-day issue is different. A model that can reliably discover unknown vulnerabilities at scale would change the balance between attacker and defender. The nightmare is not one clever exploit. It is thousands of small, previously hidden weaknesses being surfaced faster than institutions can respond. Anthropic presents Project Glasswing as the answer: use the model defensively, point it at critical code, and patch the weaknesses before hostile actors can exploit them. That is plausible. The same capability that helps an attacker find a hole can help a defender close it.
But this creates a governance problem. Who gets access? Banks? Governments? Intelligence agencies? Selected vendors? If a private company controls a system that can discover major security flaws, then it holds a form of cyber power usually associated with states. Reports of unauthorised access, if accurate, intensify the problem: a containment failure around such a model would be serious not because it ends civilisation overnight, but because it could democratise high-end cyber capability.
There is also obvious commercial theatre here. AI companies benefit from claiming that their systems are almost too powerful to release. "Dangerous" can be a marketing word. It signals superiority, attracts government attention, frightens competitors and justifies privileged partnerships with banks, defence agencies and regulators. Anthropic itself says Mythos-class systems may eventually be released more broadly with safeguards, and its later Opus 4.7 announcement describes controls intended to block high-risk cybersecurity uses. That sounds less like an uncontrollable apocalypse machine and more like a product category being carefully prepared.
So, the best judgment is neither panic nor dismissal. Mythos is probably not an existential threat to global cybersecurity in the grand philosophical sense. It is not Skynet. It does not need to be. Its danger is more mundane and therefore more credible: it may make advanced cyber work faster, cheaper and more scalable. That could help defenders if deployed responsibly. It could also help criminals and hostile states if leaked, copied, jailbroken or reproduced by competitors.
The real lesson is that cybersecurity is entering the age of automated asymmetry. The attacker needs one path in; the defender must secure everything. If AI greatly increases the speed at which paths in are discovered, then the already unequal contest becomes harsher. Mythos may be partly hype, but it is hype attached to a real structural problem. The internet was built with too much trust, too much complexity and too little maintenance. AI did not create that weakness. It may simply be the machine that finds it.