For the first time in 30 years, cybersecurity defenders might actually be winning
Everyone is catastrophizing about AI-powered attacks. Here’s the contrarian case, and why the window is narrower than it looks.
TL;DR: The prevailing narrative at Black Hat 2025 was that AI has made attackers unstoppable. The most credentialed voice in the room said the opposite, and the data backs him up. The Mythos release through Project Glasswing is the clearest live example of that defender advantage in action. AISLE’s replication work is the clearest example of why the window is narrower than it looks. The defender’s advantage is real only if security leaders act before the hype cycle commoditizes it.
The claim nobody wants to make
The cybersecurity industry has a complicated relationship with good news. When Mikko Hypponen, one of the most respected threat researchers alive and someone who has been tracking malware since the early 1990s, stood on the Black Hat stage and said that AI is “one of the few fields where defenders are ahead of the attackers,” the line landed with almost audible cognitive dissonance. After two days of sessions cataloging every way AI was making attackers faster and cheaper, the security community had all but conceded the point and begun doom-spiraling.
And then Hypponen, the person in the room with the longest memory and the most data, said the quiet part out loud.
Defenders are ahead.
My gut reaction was to agree. It’s something I’ve been saying for years: AI can and should help cybersecurity professionals more than those looking to attack us.
Why the asymmetry may be bending
To be clear, this is not a small claim. Spend any time in this industry and you internalize, almost as a physical law, the asymmetry of the defender’s position: attackers only need to succeed once while defenders need to succeed every time. That asymmetry has defined the game for thirty years. It’s why the 1990s were a golden age for hackers even against relatively unsophisticated targets. It’s why nation-state actors with effectively unlimited resources still managed to own networks that cost billions to defend. The attacker’s initiative advantage is structural, not incidental, and it doesn’t go away because defenders work harder.
Except, apparently, right now. Oops.
Symantec’s threat researchers have made the argument in more technical terms: defenders currently hold the advantage with AI-driven behavioral analytics and predictive models that attackers haven’t matched. Security teams that have been doing machine learning for behavioral detection for a decade, companies like Symantec and Carbon Black with nearly thirty years of combined AI/ML work, have a maturity advantage that attack tooling hasn’t closed yet. The “zero-day apocalypse” that many expected when AI became accessible to threat actors hasn’t materialized. The attacks are faster and cheaper, yes. But the defenses, for now, are still faster.
The defender’s AI advantage refers to the current period in which enterprise security teams, leveraging mature behavioral analytics and predictive detection models built over a decade, are outpacing threat actors whose AI-assisted attack tooling remains newer and less operationally sophisticated.
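The maturity advantage is easy to state abstractly. As a concrete illustration, here is a minimal sketch of the behavioral-baselining idea behind those detection models: learn each account's own historical norm, then flag deviations from it. Everything here is invented for illustration (the account names, the telemetry, the z-score threshold); production systems use far richer features than daily event counts.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Compute a per-account behavioral baseline (mean and standard
    deviation of daily event counts) from historical telemetry."""
    return {user: (mean(counts), stdev(counts)) for user, counts in history.items()}

def is_anomalous(baseline, user, todays_count, threshold=3.0):
    """Flag behavior that deviates more than `threshold` standard
    deviations from that account's own historical norm."""
    mu, sigma = baseline[user]
    if sigma == 0:
        return todays_count != mu
    return abs(todays_count - mu) / sigma > threshold

# Hypothetical telemetry: daily login counts per account over two weeks.
history = {
    "alice":      [4, 5, 6, 5, 4, 5, 6, 5, 4, 5, 6, 5, 4, 5],
    "svc-backup": [1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1],
}
baseline = build_baseline(history)

print(is_anomalous(baseline, "alice", 5))        # an ordinary day → False
print(is_anomalous(baseline, "svc-backup", 40))  # a sudden spike → True
```

The point is the posture, not the arithmetic: the model compares live behavior against a learned norm rather than against a static signature, which is precisely the class of detection that attack tooling has not yet matched.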
It’s worth sitting with how historically unusual this is. For most of the internet’s security history, the pattern has run in one direction:
Step 1. Attackers find a new technique.
Step 2. Attackers use it extensively while defenders catch up.
Step 3. Defenders adapt.
Step 4. Attackers find the next technique.
Rinse and repeat. The defender’s job has always been to close gaps, not to anticipate them. The idea that behavioral AI could genuinely put defenders in a proactive position, that the tools of defense could for a period outpace the tools of offense, represents a structural break from three decades of precedent. And the clearest live example of that break has a name.
What Mythos actually proves
When Anthropic released a frontier model capable of autonomous vulnerability discovery and exploit development, they didn’t drop it publicly. They released it through Project Glasswing, a restricted-access program for vetted defenders. Read structurally rather than as a product launch, this is something the industry has not seen in three decades of internet security. A capability that would have been catastrophic in attacker hands was made available to the defenders first.
That alone is worth sitting with. The history of dual-use technology in this industry is that offense gets the new toy first and defense scrambles to catch up from below. Glasswing inverted the order. For a brief and historically unusual window, defenders had access to a class of capability that attackers did not.
The catch arrived almost immediately. Independent researchers at AISLE showed that some of Mythos’s headline zero-days, including the FreeBSD vulnerability Anthropic led with, could be reproduced using much smaller open-weight models. Glasswing bought defenders a window, but it did not close the door behind them.
That is the whole story of the defender’s AI advantage in miniature. A real structural shift, paired with a real expiration date. Which is exactly Hypponen’s second point.
The escalation pattern to watch
Hypponen, true to form, didn’t leave the optimism unqualified. He noted that when systems are hardened, attackers shift to targeting people instead. They phish users, exploit weak endpoints, and turn to social engineering rather than brute force. The AI threat is following a recognizable escalation pattern: state-sponsored actors are using it to automate reconnaissance, code development, and exploitation chain construction. Google’s threat intelligence team has tracked APT41 using Gemini for C++ and Golang code development; North Korean units using it for social engineering lures across multiple languages; and, perhaps most ominously, the first detected malware capable of calling an AI API mid-execution to rewrite its own obfuscation on the fly.
The zero-day apocalypse hasn’t arrived. But the ingredients, as one researcher put it, are all there.
What the 90s tell us about what comes next
The 90s analogy offers a useful template for what comes next, if the industry gets this wrong. By the early 2000s, defenders had closed the gap enough to make the original wave of internet exploits less reliable. What followed wasn’t a reduction in threats, but a professionalization of the attack side. Attackers monetized, coordinated, and drew in nation-state involvement. The threat didn’t retreat when defenses improved; it evolved into something harder to address. The current defender advantage, if it exists and if it holds, needs to be used to build durable capabilities rather than checked as a box on an annual security review.
The hype cycle risk
The risk is that we haven’t learned from the past and we’ll let our advantage slip. The AI security market right now is generating enormous hype and significant disillusionment in roughly equal measure.
Walking the RSA 2026 show floor tells you everything you need to know about where we are in the cycle. Did you see the dune buggies wrapped in AI banners? Or the airport walls covered in claims about autonomous threat detection? Every vendor announcing the AI SOC?
Yet Cisco’s own survey, released at the conference, found that while 85% of enterprises are experimenting with AI agents, only 5% have moved them into production. The pitch is everywhere, but the reality is almost nowhere.
Forrester called the autonomous AI-powered SOC a “pipedream,” and by late 2024 their analysts were predicting security leaders would scale back generative AI investments as productivity gains fell short of what vendors had promised. Vendors are racing to attach AI labels to products that haven’t materially changed, and buyers are making purchasing decisions based on marketing rather than outcomes.
This is, again, recognizable. The antivirus era of the 1990s followed the same arc: a real capability, correctly identified, then oversold so quickly that it hardened into security theater and stopped protecting against anything genuinely new.
The difference this time is the velocity. The gap between “real capability” and “commoditized checkbox” is compressing from years to months.
The window in which defenders hold a meaningful AI advantage over attackers is real, but it’s also closing, and it’s closing fast.
What it takes to hold the advantage
Listen to me closely. The defenders who will extend that window and stay ahead aren’t the ones buying the most AI-labeled tools. They’re the ones betting on visibility into what their systems are actually doing, in real time, rather than what those systems look like in a quarterly scan or a static configuration audit. The attacker using AI today isn’t waiting for your next penetration test, and the defense that will hold isn’t either.
This is the deeper lesson of the Mythos release, and the same lesson the 90s already wrote down about every capability that once looked like a permanent advantage: Glasswing alone is not the durable answer. AI defense is fundamentally a context problem, solved at runtime, not a model problem. Glasswing gave defenders a head start on access to one specific capability, not a lasting edge, because a lasting edge in AI security does not come from access to any particular model, however capable. It comes from knowing what is actually running in your environment, what is normal, and what is happening right now.
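What “context at runtime” means in practice can be sketched in a few lines: a per-host baseline of what is known-good, checked against what is executing right now. The snippet below is purely illustrative, with hypothetical process names and paths; a real deployment would baseline far more than executable paths (parent processes, network peers, loaded modules).

```python
# A per-host baseline: which (process name, executable path) pairs are
# normal here. All names and paths below are hypothetical examples.
EXPECTED = {
    ("nginx", "/usr/sbin/nginx"),
    ("postgres", "/usr/lib/postgresql/16/bin/postgres"),
    ("sshd", "/usr/sbin/sshd"),
}

def audit(snapshot):
    """Return processes observed right now that fall outside the
    known-good baseline for this host."""
    return [proc for proc in snapshot if proc not in EXPECTED]

# A live snapshot: two expected daemons plus a binary running out of /tmp.
snapshot = [
    ("nginx", "/usr/sbin/nginx"),
    ("sshd", "/usr/sbin/sshd"),
    ("nc", "/tmp/.cache/nc"),
]

for name, path in audit(snapshot):
    print(f"unexpected process: {name} at {path}")
```

The allowlist itself is not the point; the posture is. The check runs against what is executing now, not against last quarter’s configuration audit.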
Models change. Context is what defends.


