The AI security industry is calling 2025 the new 1990s. The uncomfortable truth is that we predicted the mistakes we’re making right now, and made them anyway.

TL;DR: AI security in 2025 is repeating the same structural mistakes that made the early internet a golden age for hackers — not because the industry forgot the lessons of the 1990s, but because the market keeps making it rational to ignore them.


Key Takeaways

  • The 1990s-vs-AI parallel isn’t a memory failure; it’s a repeat of the same incentive structure that rewarded speed and penalized security friction.
  • “Missing basics” — no authentication, no input validation, excessive permissions — are the same fundamentals that made the early commercial internet a golden age for attackers.
  • The 1990s cycle broke when the industry shifted from perimeter defense to runtime visibility. AI security is approaching the same inflection point now.

The jokes that land too well

At Black Hat 2025, Wendy Nather of 1Password compared AI agents to toddlers. “You have to follow them around and make sure they don’t do dumb things,” she said.

I’d compare them to my robot vacuum. They quietly and powerfully help with productivity — until they start destroying the house.

Nils Amiet from Kudelski Security offered his own verdict on the current state of AI security: “If you wanted to know what it was like to hack in the ’90s, now’s your chance.”

And Joseph Carson, chief security evangelist at Segura, perhaps put it most succinctly: deploying AI in your business, he said, is “like getting the mushroom in Super Mario Kart. It makes you go faster, but it doesn’t make you a better driver.”

The audience laughed like they always do — some perhaps thinking about driving on mushrooms. Then we all went back to our organizations while AI continued to expand without the security fundamentals in place.

That’s the part nobody talks about.

An analogy everyone is making, for good reason

The comparison between AI-era security and the early internet has become the industry’s favorite analogy, and for good reason. The parallels are genuine and frankly damning. Companies are rushing to establish an AI presence before they understand the attack surface or the risk landscape. Developers are releasing capabilities on the assumption that security is someone else’s problem. And the vendor ecosystem is generating more heat than light, moving fast enough to outrun accountability. Sound familiar? It should. We watched this trainwreck unfold starting in the ’90s, unable to look away or escape the fiery consequences for the better part of a decade. The modern cybersecurity industry grew out of the chaos that ensued.

And then, when AI arrived and the incentives lined up just right, we did it again.

This isn’t a memory problem

The dominant narrative treats this as a failure of memory – a generational gap where the engineers deploying AI agents today weren’t around to watch the Morris worm, the ILOVEYOU virus, or the early ransomware campaigns teach entire industries their first expensive lessons. That narrative is too generous. The people making deployment decisions in 2024 and 2025 knew exactly what they were doing. The CISOs in the room at Black Hat laughed at the toddler joke because they recognized it. They’re the ones fighting to be included in AI deployment decisions before the keys are handed over, not after. They know the story. They’re watching it happen in real time.

The missing incentive

So if it’s not a failure of memory, what is it?

It’s a structural problem. The incentives that produced the security chaos of the early internet are the same incentives producing the AI security chaos of today. Speed to market is rewarded and security friction is penalized. The costs of a breach are often diffused, spread across customers, shareholders, and downstream third parties, while the benefits of shipping fast are concentrated and immediate. No executive ever got a bonus for the breach that didn’t happen because they slowed down deployment to harden the attack surface. The 1990s didn’t teach us a lesson we forgot; they taught us a lesson the market keeps making it rational to ignore.

The scale of the problem is significantly larger this time. In the 1990s, the deployment surface was comparatively narrow: web servers, email clients, early e-commerce. The blast radius of bad security hygiene was real but bounded. Today, AI is being embedded into customer service pipelines, code development environments, financial operations, healthcare workflows, and critical infrastructure simultaneously. And the scary part is that when an AI agent gets compromised, it doesn’t just leak data; it takes actions: sending emails, executing code, making API calls, and interacting with interconnected systems. As Nathan Hamiel of Kudelski Security noted at Black Hat, AI coding assistants are already being compromised to steal encryption keys and access credentials. “When you deploy these tools, you increase your attack surface. You’re creating vulnerabilities where there weren’t any.”
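Bounding that blast radius is conceptually simple, even if the engineering isn’t. Below is a minimal sketch, assuming a hypothetical Python agent runtime — the tool names and the dispatch_tool_call function are made up for illustration, not any vendor’s API — in which every tool call passes through an allow-list and high-risk actions require human approval.

```python
# Illustrative only: a hypothetical gate between an agent and its tools,
# so a compromised agent can only do what its role explicitly permits.
from typing import Callable

# Read-only, low-risk tools the agent may call on its own.
ALLOWED_TOOLS: dict[str, Callable[..., str]] = {
    "search_docs": lambda query: f"results for {query!r}",
}

# Actions that move money, run code, or contact the outside world
# are never auto-approved by this gate.
HIGH_RISK_TOOLS = {"send_email", "execute_code", "call_payment_api"}


def dispatch_tool_call(tool_name: str, **kwargs: str) -> str:
    """Execute an agent-requested tool call only if it is explicitly allowed."""
    if tool_name in HIGH_RISK_TOOLS:
        raise PermissionError(f"{tool_name} requires human approval")
    tool = ALLOWED_TOOLS.get(tool_name)
    if tool is None:
        raise PermissionError(f"{tool_name} is not on the allow-list")
    return tool(**kwargs)


# Example: the allowed call succeeds; anything else is refused.
print(dispatch_tool_call("search_docs", query="quarterly report"))
```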

The “missing basics” problem

The researcher community has a name for the specific failure mode repeating itself: missing basics. The term covers exposed AI models with no authentication, no input validation, and excessive permissions granted because users expect AI to do everything. These aren’t novel attack surfaces requiring novel solutions. They’re the same mistakes that made the early commercial internet a golden age for hackers, restaged with a larger cast and higher stakes. In fact, the 2025 Verizon Data Breach Investigations Report found that 68% of breaches stemmed from known, fixable fundamentals: unpatched vulnerabilities, overprivileged accounts, and human mistakes.

Missing basics in AI security refers to the absence of fundamental security controls (authentication, input validation, and least-privilege access) that defenders mastered for web applications in the 1990s and are now failing to apply to AI systems.
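To make that concrete, here is a minimal sketch of what those three basics look like in practice, assuming a hypothetical internal model endpoint built with FastAPI and pydantic. The names (require_api_key, GenerateRequest, call_model, /generate) are illustrative, not any particular product’s API.

```python
# Illustrative only: a hypothetical internal model endpoint with the
# "missing basics" added. Assumes FastAPI/pydantic; names are made up.
import os
import secrets

from fastapi import Depends, FastAPI, Header, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()


def require_api_key(x_api_key: str = Header(...)) -> None:
    """Basic #1: authentication. Even a shared secret beats an open endpoint."""
    expected = os.environ.get("MODEL_API_KEY", "")
    if not expected or not secrets.compare_digest(x_api_key, expected):
        raise HTTPException(status_code=401, detail="invalid or missing API key")


class GenerateRequest(BaseModel):
    """Basic #2: input validation. Bound prompt size and token budget."""
    prompt: str = Field(..., min_length=1, max_length=4000)
    max_tokens: int = Field(default=256, ge=1, le=1024)


def call_model(prompt: str, max_tokens: int) -> str:
    """Stand-in for the real model client; returns a placeholder completion."""
    return f"[completion for {len(prompt)}-char prompt, <= {max_tokens} tokens]"


# Basic #3: least privilege. This endpoint can call the model and nothing else;
# email, code execution, and other tools sit behind separately authorized services.
@app.post("/generate", dependencies=[Depends(require_api_key)])
def generate(req: GenerateRequest) -> dict:
    return {"completion": call_model(req.prompt, req.max_tokens)}
```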

What broke the cycle last time

What eventually broke the cycle in the ’90s wasn’t a single breakthrough, but the accumulating cost of the alternative. The ILOVEYOU worm. Code Red. SQL Slammer. Each one landed hard enough, in enough places, that the argument for slowing down and building security in from the beginning became economically coherent. The industry eventually stopped trying to defend a perimeter it couldn’t see and started trying to understand what was actually happening inside its own networks. Behavioral detection, intrusion analysis, and real-time visibility became standard, not because they were philosophically appealing ideas, but because static defenses had already failed too visibly to keep defending.

We are, depending on your optimism level, somewhere in that middle period. The expensive, visible failures are accumulating: the Hugging Face backdoor discoveries, state-sponsored actors using AI to automate 80 to 90 percent of an attack chain, and proof-of-concept malware that rewrites its own behavior mid-execution using a live LLM call. The argument for slowing down and building security that actually sees inside these systems, rather than scanning them from the outside and hoping, is becoming harder to dismiss.

The prior inflection point ended with the industry making a fundamental shift in how it thought about visibility. The question for all of us in the current moment is: how many expensive lessons will it take before we make a similar, fundamental shift for AI? We obviously can’t roadblock the kind of AI experimentation that will set some organizations apart for the next generation, but we must apply the lessons of past failures so that security teams can enable that future success.