The orchard is a jungle now
Anthropic’s superhuman hacker model has shifted the AI conversation from “what if it fails?” to “what if it works?” For CTOs, the answer is thorny.
“Are they seriously asking us to wait until they figure out if we need to rewrite all software?” was a phrase I heard repeatedly last week. Claude Mythos, Anthropic’s latest model with unparalleled hacking capabilities, has everyone on edge. Facing a massive maintenance bill, CTOs are understandably anxious to grasp the scale of the problem and begin taking action.
Independent bodies like Britain’s AISI have confirmed Mythos’s capabilities: a hacker of superhuman abilities exists, albeit under lock and key. Even a small chance that all software will need to be upgraded en masse has prompted the industry to shift from its usual gung-ho attitude to a more measured release process.
The playbook will surely evolve, especially as governments wake up to the threat involved, and the question of who gets early access intensifies. Nonetheless, there are three early takeaways for business:
IT leaders must prepare now for increased threats
In the medium term, AI should improve security
Long term, leaders will see tech as less of an orchard and more of a jungle
I. Safety-first release process
We are lucky that the first model of this power is an expert on cybersecurity. Software is easy to patch, isolate and replace. Had the model been an expert in biotech, our ability to respond would have been very limited. As it stands, Mythos allows us to create a playbook for releasing powerful models to the world.
This playbook is beginning to take shape. First, the labs keep the model behind closed doors. Then, a select number of “systemic” companies gain access to patch vulnerabilities in their core software. The circle widens to include accredited professionals and, finally, the wider public.
The most powerful models might never be openly available. As they are both expensive to run and dangerous to use, access will likely be restricted to users who have completed KYC-style identity checks. Some models may even be restricted to businesses that adhere to standards akin to biosafety levels.
As self-regulation inevitably gives way to a hard regulatory framework, what does this mean for software security?
II. Reasons to be hopeful
Deployed carefully, AI can be a more powerful shield than a sword. It will empower the security industry to address existing vulnerabilities and test for new ones before software is released. AI can monitor for attacks round the clock, help strengthen defences, fend off attacks and even expose malicious actors.
Robust cybersecurity tools should also become more widely accessible. The current array of tools – penetration tests, malware detectors, voluntary standards – is too narrow in scope. AI-powered defenders can allow even smaller companies to test and defend their tech stack more effectively.
There is no shortage of things that can go wrong. Access to the preview releases could be compromised, or open-source AI of equal power could be developed before critical systems are patched. The long tail of legacy software components that underpin many critical systems could lead to mass outages, data losses or theft.
Deployed carefully, AI can be a more powerful shield than a sword.
I side with the optimistic view: AI will make software more secure in the medium term. However, businesses should take extra care to mitigate the impact on their operations during the transition.
III. Security hygiene
Raising alarms on cybersecurity can be a self-fulfilling prophecy. Given the publicity generated, it is likely that a wave of attacks will materialise the day that Mythos is released. While the labs are promising guardrails around the model’s most advanced capabilities, stories of unauthorised access are already emerging.
According to the AISI, Mythos poses the largest threat to environments with weak security posture. The case for reviewing your security hygiene has never been more pressing: strengthen passwords, enforce MFA everywhere, review access and accounts, patch software and train staff.
Beyond the basics, companies need to audit and harden their tech stack. Mission-critical and customer-facing systems need to be strengthened and closely monitored. Unauthorised “shadow IT” should be culled or isolated. Everything in between – utilities, mid-priority apps, prototypes – needs to be investigated and lined up for patching.
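The triage above lends itself to simple tooling. As a minimal sketch (the inventory format, system names and 90-day threshold are illustrative assumptions, not anything Anthropic or the AISI prescribes), a few lines of Python that flag systems overdue for patching:

```python
from datetime import date, timedelta

# Hypothetical inventory: system name -> date its last patch was applied.
inventory = {
    "crm-portal": date(2025, 1, 10),
    "legacy-billing": date(2023, 6, 2),
    "vpn-gateway": date(2025, 3, 1),
}

def overdue(inv, today, max_age_days=90):
    """Return the systems whose last patch is older than max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, patched in inv.items() if patched < cutoff)

print(overdue(inventory, date(2025, 3, 15)))  # → ['legacy-billing']
```

Tightening the threshold, or tagging each entry as mission-critical, mid-priority or shadow IT, turns the same loop into a rough patching queue — a starting point, not a substitute for a proper audit.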
IV. Welcome to the jungle
Raising defences against armies of AI-powered hackers requires a completely different attitude to tech. Forced to take a more defensive stance, enterprise IT will have a smaller footprint, with budgets redirected from new tools to defending the existing ones.
The effects on IT systems will be significant. Software will appear more paranoid, exasperating many users. Vendor selection will skew toward maintainability rather than features. Security Operations (SecOps), already a staple in software development companies, will find its way into even smaller organisations’ IT departments.
The SaaSpocalypse might not be driven by AI’s ability to create software, but by its power to infiltrate it.
Mythos’s biggest impact is that technology is no longer an orchard to pick fruit from, but a jungle to protect yourself against. When every added piece of software can provide an attack vector, the SaaSpocalypse might not be driven by AI’s ability to create software, but by its power to infiltrate it.
V. Takeaways for leaders
Mythos is indeed a watershed moment in AI and cybersecurity. CTOs shouldn’t wait for Mythos’s public release to improve security hygiene and deal with shadow IT, as the sheer publicity will drive a flurry of attacks.
The medium-term outlook is hopefully better. AI-powered defence tools will become accessible to companies that currently can’t afford organised SecOps. The security gap between large enterprises and SMEs should narrow.
The era of frictionless SaaS adoption is over. Every new tool is a potential attack vector. Vendor selection, IT footprints and budgets will all look different in five years.
The Mythos saga epitomises the law of unintended consequences. The CTOs impatiently awaiting updates already sense the shift: the SaaS orchard they once picked opportunities from is turning into a jungle.
Recommended reading
Anthropic Research: Mythos Preview
Anthropic’s own account of what Mythos can do — and, pointedly, what Anthropic has so far chosen not to let it do. Essential context for understanding the safety-first release logic and the self-interest behind it.
The New York Times: Anthropic’s Mythos AI
The mainstream take on Mythos is a useful measure of how the story is perceived beyond the tech bubble. The coverage itself is part of the self-fulfilling prophecy problem the article describes.
AISI: Our Evaluation of Claude Mythos Preview’s Cyber Capabilities
Britain’s AI Safety Institute provides the independent verification that Anthropic’s own claims needed. The key finding — that Mythos is most dangerous when security is lax — should focus minds in any IT department.
Anthropic: Glasswing: Responsible Capability Release
The framework Anthropic uses to manage staged access: labs first, systemic companies next, then accredited professionals and finally the wider public. Whether this holds as commercial pressure mounts is the question the article leaves open.
Bloomberg: Mythos Is Being Accessed by Unauthorised Users
The release process is already leaking. If guardrails fail before public launch, the case for an urgent security hygiene review becomes harder to argue against — whatever your risk appetite.
The Guardian: What Would a SaaSpocalypse Mean?
Predating Mythos, this piece anticipated a reckoning for the SaaS model — though on cost, not security grounds. Read alongside this article, it suggests the pressure on enterprise software stacks is coming from multiple directions at once.


