Mythos model sparks warnings over AI cyber threats
Security leaders are warning enterprises about the cyber risks tied to Anthropic's upcoming Mythos model.
Experts say companies must rethink how they defend against large language model threats.
Anthropic's upcoming Mythos model has prompted fresh warnings from security and technology leaders, who argue that the latest generation of large language models will bolster cyber defences and criminal attacks alike.
Executives at security firm Adaptive Security and digital consultancy Valtech say many organisations still rely on processes and controls that do not match the speed and sophistication of modern AI systems.
Brian Long, chief executive of Adaptive Security, said recent advances in generative AI have already reshaped the threat landscape.
"Mythos is exactly the kind of model that should be keeping security teams up at night. Not because it is uniquely dangerous on its own, but because every major frontier model release drives down the cost and skill required to run a sophisticated attack. The tools proliferate, the open-source versions follow, and then anyone can run them on their own hardware with little to no moderation. That cycle is already well underway. In the U.S. alone, we saw over 100,000 deepfake attacks last year, a 17x increase from the year before. When I started talking to CISOs about this 18 months ago, roughly one in ten had seen a deepfake attack succeed at their organization. Today it is more than half," Long said.
Mark Ardito, chief technology officer at Valtech, said the speed and automation of frontier models challenge existing security assumptions.
"Claude Mythos should be a wake-up call for every enterprise. When an AI can uncover vulnerabilities that have remained hidden for decades in mere seconds, it changes the calculus of cybersecurity. We are moving to a speed that is no longer operating at human levels," Ardito said.
The comments reflect growing concern that widely accessible AI models will drive a rise in high-quality phishing, deepfake fraud, and automated exploitation of long-standing software flaws.
Security teams now face adversaries that can generate tailored social-engineering content in many languages, at scale, with realistic audio and video impersonations of executives or trusted colleagues.
Long cited deepfakes as an early example of how quickly AI-enabled threats can spread once tools become mainstream and inexpensive.
His estimate of more than 100,000 deepfake attacks in the United States last year, a 17-fold increase on the year before, points to how quickly attackers are mastering these tools.
Adaptive Security's data suggests many employees remain unprepared, especially when attacks arrive through channels that traditional email security systems do not monitor.
"The first thing companies need to do right now is test their people. Not next quarter, but now. Up to 60 percent of employees will currently fail a GenAI-powered attack. That number is not going to improve on its own as models get more capable. Organisations need to simulate what these attacks actually look like across email, SMS, voice calls, and video - the channels that are largely unmanaged today - and find out exactly where their workforce is vulnerable before an attacker does. Security awareness training that takes the form of a once-a-year batch-and-blast video is not going to cut it against a model like Mythos," Long said.
Ardito described the challenge as a widening gap between the speed of offensive AI and the pace at which many enterprises adapt their defences.
Valtech's research suggests many senior leaders feel "tech anxiety" when weighing AI adoption against security exposure.
AI-Augmented Defence
Ardito said manual processes cannot keep pace with autonomous AI systems that can chain exploits or probe thousands of potential weaknesses in parallel.
"Our 2024 research at Valtech highlighted that tech anxiety is a major hurdle for leaders. Much of that anxiety stems from the gap between the speed of innovation and the speed of security. To bridge that gap and prepare for the public release of frontier models like Mythos, companies should prioritize three areas.
"First, pivot to AI-augmented defence. You cannot fight this new level of speed in attacks with manual processes. If an autonomous model can chain exploits together faster than a human can read a ticket, your defensive strategy must also be agentic. Companies need to start integrating these same frontier models into their own red-teaming and patch-management workflows. The goal is to find your own zero-days before an external agent does.
"Second, radical legacy hygiene. Just because old systems appear stable does not mean they are safe. Mythos was able to find vulnerabilities in code that is 27 years old and had long been treated as stable. Organisations must conduct a deep, AI-driven audit of their entire software supply chain. They should assume every piece of unmanaged legacy code is now a visible target and prioritize modernizing or isolating those assets immediately.
"Third, design for resilience over perimeter. Traditional security thinking defaults to a moat-and-castle approach. When vulnerabilities can be found and exploited at scale, trust must be earned at every layer of the digital experience. Assume a breach will occur and ensure your architecture can contain the impact and recover autonomously without bringing down the entire customer experience," Ardito said.
The emphasis on legacy software reflects concern that advanced models will revisit old codebases at machine speed and surface flaws that manual audits or traditional scanners missed for years.
Many large organisations still run decades-old systems that connect to modern digital channels and hold sensitive customer or financial data.
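A first pass at the "radical legacy hygiene" Ardito describes can be as plain as an inventory that surfaces the oldest, least-touched code for audit. The sketch below is illustrative only: it uses file modification times as a rough proxy for neglect, which a real audit would supplement with version-control history and software-bill-of-materials data.

```python
# Rough sketch: rank source files by age (mtime) to prioritise legacy audits.
# Illustrative only; a real audit would also mine VCS history and SBOM data.
import time
from pathlib import Path

SOURCE_SUFFIXES = {".c", ".cpp", ".h", ".java", ".py", ".pl", ".cob"}
YEAR_SECONDS = 365.25 * 24 * 3600

def stale_files(root: Path, min_age_years: float = 10.0) -> list[tuple[float, Path]]:
    """Return (age_in_years, path) pairs for source files older than the cutoff."""
    now = time.time()
    findings = []
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in SOURCE_SUFFIXES:
            age = (now - path.stat().st_mtime) / YEAR_SECONDS
            if age >= min_age_years:
                findings.append((age, path))
    return sorted(findings, reverse=True)  # oldest first

if __name__ == "__main__":
    for age, path in stale_files(Path("."))[:20]:
        print(f"{age:5.1f} years  {path}")
```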
Human Factors And Controls
Alongside technical measures, Long said governance around everyday decisions remains a weak point, especially for financial authorisations and urgent requests that exploit pressure and hierarchy.
"The second thing is controls. Most organizations do not have a passcode system for verifying high-stakes requests. They do not have a clear protocol for what an employee should do when they receive an urgent call from someone who sounds exactly like their CFO asking for a wire transfer. Those controls need to exist before the next generation of models makes the attacks indistinguishable from the real thing, and we are very close to that point. The companies that will weather what is coming are the ones that treat this with urgency today, because the attackers are moving fast and they are not waiting for anyone to catch up," Long said.
"Mythos is exactly the kind of model that should be keeping security teams up at night. Not because it is uniquely dangerous on its own, but because every major frontier model release drives down the cost and skill required to run a sophisticated attack. The tools proliferate, the open-source versions follow, and then anyone can run them on their own hardware with little to no moderation. That cycle is already well underway. In the U.S. alone, we saw over 100,000 deepfake attacks last year, a 17x increase from the year before. When I started talking to CISOs about this 18 months ago, roughly one in ten had seen a deepfake attack succeed at their organization. Today it is more than half," Long said.