Key takeaways:

  • Most CISOs are trapped between two opposing forces right now: the board wants AI everywhere, while security teams are operating with controls that were never designed for autonomous systems, non-deterministic behavior, or AI agents acting across cloud environments.
  • The five challenges below are showing up across almost every enterprise conversation I’m having right now:
    1. visibility gaps across models, agents, identities, and data flows;
    2. prompt attacks and runtime behaviors traditional tools were never built to detect;
    3. no meaningful way to validate whether AI guardrails actually hold under attack;
    4. compliance frameworks that were not designed for Gen-AI systems;
    5. tool sprawl as every team buys its own AI security platform.
  • These are not five separate problems. They are symptoms of the same architectural issue: AI workloads still run on cloud infrastructure, but the industry continues treating AI security like it exists somewhere outside the cloud stack.
  • Runtime context changes everything. The teams that unify cloud security and AI security around runtime truth will move faster, reduce complexity, and defend modern environments more effectively than teams operating through fragmented tooling.

The CISO squeeze in 2026

If you’re a CISO right now, you already know the feeling.

The board wants an AI strategy immediately. Engineering is already shipping copilots, agents, and Gen-AI workflows into production. Teams are connecting models to APIs, internal systems, SaaS platforms, and cloud infrastructure faster than governance can keep up.

Meanwhile security teams are still trying to answer foundational questions:

  • Which models are actually running?
  • Where is sensitive data flowing?
  • Which agents can take actions autonomously?
  • Which non-human identities are fueling those systems?
  • What happens if one of them gets manipulated?

And somewhere in the middle of all this, vendors are telling organizations they need another collection of standalone AI security tools.

I think that’s the wrong direction.

Security has to stay grounded in production reality. And production reality is that AI systems are now part of your cloud runtime.

That changes everything.


Challenge 1: You can’t secure what you can’t actually see

The first problem most teams hit is visibility.

Not dashboard visibility. Real visibility.

Most organizations cannot confidently answer questions like:

  • Which workloads are calling OpenAI, Bedrock, Vertex, Claude, or internal models?
  • Which systems are processing regulated or sensitive data through those models?
  • Which AI workloads are internet exposed?
  • Which MCP servers are connected to which tools?
  • Which autonomous agents have permission to take actions across environments?
  • Which models are running inside infrastructure we own versus SaaS we consume?

Traditional CSPM platforms were never designed for this level of runtime awareness.

Static snapshots made sense in slower-moving environments. AI systems do not behave that way. The environment changes too quickly, the identities are too dynamic, and the behaviors that matter only exist at runtime.

That means AI visibility now requires:

  • Runtime inventory across models, frameworks, and AI services
  • Deep awareness of AI dependencies, wrappers, agents, and frameworks
  • Identity context across non-human identities and autonomous workflows
  • Runtime data awareness tied directly to model behavior

If you cannot enumerate your AI estate at this level, you cannot govern it. And if you cannot govern it, every downstream control becomes weaker.

This is not a tooling problem. It’s an architectural one.


Challenge 2: Prompt attacks broke the assumptions most runtime tools rely on

Security teams spent years getting better at runtime cloud detection.

Container behavior. Lateral movement. Network telemetry. Privilege escalation. Identity abuse.

Then prompt injection arrived and exposed something important: most security tooling still assumes systems behave deterministically.

Models don’t.

A prompt attack is not just a model problem. It’s a cloud attack chain that starts at the model layer.

The chain often looks something like this:

  1. Internet traffic hits a Gen-AI endpoint.
  2. A crafted prompt manipulates the model.
  3. The model issues actions or queries it should not.
  4. Excessive permissions turn model manipulation into actual cloud impact.

Most existing security stacks can see parts of steps 1, 3, and 4.

Step 2 is where visibility collapses.

The model is being treated like trusted logic when in reality it’s something that can be socially engineered in real time.

That is a fundamentally different security problem.

And static AI-SPM scans alone will never solve it because the attack only exists while the system is running.

What teams actually need is runtime AI detection tied directly to:

  • identity context
  • data classification
  • model behavior
  • runtime telemetry
  • exploit paths
  • cloud activity

Not another isolated AI dashboard.

If your AI security tooling lives in a completely separate operational workflow from your cloud runtime security, you’ve probably created a seam attackers will eventually exploit.


Challenge 3: Most teams cannot prove their AI guardrails work

This is where things get uncomfortable.

A lot of organizations have policies. Far fewer have validation.

Teams deploy AI applications with:

  • prompt filtering
  • jailbreak detection
  • output controls
  • allowlists
  • safety layers

Then they assume the problem is solved.

Until somebody finds a path nobody tested.

The issue is not whether you have guardrails. The issue is whether those guardrails survive real adversarial conditions.

And the only meaningful way to answer that is continuous attack validation against live systems.

Not a one-time penetration test.
Not a tabletop.
Not a PDF from procurement.

Continuous validation.

That means:

  • multi-stage AI attack emulation
  • adversarial prompt testing
  • runtime exploit validation
  • cross-cloud attack path analysis
  • validation from model abuse to actual cloud impact

The board ultimately wants a simple answer:

“Are we protected?”

Most organizations answer with policy language.

Very few answer with evidence.

Those are not the same thing.


Challenge 4: Compliance frameworks were not written for AI systems

This challenge gets less attention than it should.

Most compliance frameworks were written for environments where systems behaved predictably and controls mapped cleanly to known infrastructure.

AI breaks that model quickly.

Across enterprise environments right now you’ll find:

  • AI workloads processing sensitive data without proper controls
  • inference endpoints exposed to the internet
  • models connected to over-permissioned identities
  • plaintext secrets attached to AI systems
  • vulnerable AI infrastructure sitting inside production environments

Most organizations are forcing these findings into existing control categories because the frameworks themselves haven’t caught up yet.

That creates compliance debt.

And eventually auditors, regulators, customers, and boards are going to ask much harder questions around:

  • model governance
  • AI runtime exposure
  • data handling
  • agent permissions
  • adversarial resilience
  • autonomous decision making

This is why I believe AI governance is becoming infrastructure.

The old model of governance through static policy documents is not enough when AI systems are chaining actions dynamically across APIs, cloud environments, and SaaS platforms.

You need runtime-aware governance grounded in how systems actually behave.


Challenge 5: AI security is becoming the next fragmented stack

This is the mistake I think the industry is about to repeat.

Right now:

  • AppSec buys an LLM firewall
  • Cloud security buys AI-SPM
  • IAM buys a non-human identity platform
  • Governance teams buy AI risk tooling
  • Data teams buy model lineage products

Each purchase makes sense independently.

Collectively, they recreate the exact fragmentation cloud security spent the last decade trying to fix.

Five consoles.
Five telemetry models.
Five operational workflows.
No shared runtime context.

Security teams already learned this lesson with cloud security tooling fragmentation.

We should not repeat it with AI.

AI workloads are still cloud workloads.

The identities, runtime activity, exploit paths, and data exposure are all interconnected. Treating them as separate operational domains creates blind spots and slows defenders down.

And attackers benefit every time defenders operate through fragmented systems.


So what does good look like in 2026?

Good AI security in 2026 is not another isolated category.

It’s runtime-aware security grounded in production reality.

It looks like:

  • One operational platform instead of disconnected tooling
  • Runtime visibility across cloud and AI environments together
  • Shared telemetry across identities, workloads, models, and data
  • Continuous validation against adversarial behavior
  • Security teams operating from a single source of runtime truth

The organizations that continue treating AI security as separate from cloud security are going to struggle with visibility gaps, operational friction, and escalating complexity.

The organizations that unify them early will have a significant advantage.

Because ultimately the problem is not “AI security” versus “cloud security.”

The problem is defending modern production environments where AI systems now live inside the runtime itself.

And security has to evolve accordingly.

If there’s one thing I’d leave you with, it’s this:

The industry is trying to turn AI security into another fragmented collection of categories and point products. I think that’s the wrong abstraction.

AI workloads run on cloud infrastructure.
The identities are connected.
The runtime behavior is connected.
The attack paths are connected.
The data exposure is connected.

Your security strategy should reflect that reality.

And if you want the full framework, our Field Guide to AI Security in 2026: View, Protect, Validate drops this summer. You can sign up to get it the day it’s published.

Get the Field Guide →


Read more from the launch series