Key takeaways:

  • CNAPP gave you visibility into cloud risk. AI Security has to give you control over AI risk. We give you both.
  • Today we’re shipping the Upwind AI Security Platform (View, Protect, Validate), built right into the same runtime fabric as our cloud security platform.
  • The AI Agentic Pack is generally available. Blue investigates, Green remediates, Red proves what’s exploitable, and Choppy AI runs the show as the conversational control plane.
  • One Platform. One SKU. Frictionless. Code → Pipeline → Cloud → Runtime, including every AI workload running across them.

Cloud security and AI security stop being two things

For the last couple of years, a lot of the security industry has been quietly insisting that AI security has to be its own thing. Keep tools, teams, budgets, and dashboards separate or else…

Well, we disagree.

We listened to our customers, and what they’re describing is one environment, not two. The model that’s processing customer data lives in the same VPC as the database. The agent making API calls is using the same non-human identity as your batch jobs. The prompt injection that lands tonight is going to travel through the same network paths your CSPM is already watching for misconfiguration. That’s one tangled environment if we’ve ever seen one.

So here’s how we think about it: CNAPP gave the industry visibility into cloud risk. AI Security has to give the industry control over AI risk. We give you both. One platform, one SKU, one runtime fabric underneath all of it.

That’s what’s shipping today.

👉 Want to go deeper? Read The 5 Hidden Challenges of Securing Enterprise AI in 2026 →


So why now?

Because the AI shift breaks single-purpose tools. Plain and simple.

A typical cloud workload in 2024 was a service, a database, and a network path between them. A typical cloud workload today is a service that calls a model that calls an agent that calls another model that touches sensitive data through a non-human identity that nobody approved last quarter. Static configuration scanning was never going to catch that.

We’ve been watching this play out across our customers, and three patterns keep showing up:

  1. The shadow AI estate. Engineers are spinning up models, frameworks, and agents faster than security teams can inventory them. By the time a CSPM picks up a configuration drift, the wrapper is already in production and the AI-BOM is already incomplete.
  2. The multi-step attack chain. Internet ingress → LLM prompt injection → SQL injection → excessive agency. Each step looks innocent in a static scan. Put them together and you’ve got the most critical exploit path in your AI estate, and only runtime sees the chain assemble.
  3. The silent data bleed. A workload starts sending payloads to an unsanctioned model endpoint. PII, secrets, customer data, all flowing out through an egress path no DLP rule was written to expect.
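To make the third pattern concrete, here's a minimal sketch of the detection logic in plain Python. Every name here (the field names, the sanctioned-host list, the payload markers) is illustrative, not the Upwind API; a real runtime sensor works from live flow telemetry, not dicts.

```python
# Illustrative only: flag workloads sending sensitive-looking payloads
# to model endpoints outside the sanctioned list.

SANCTIONED_MODEL_HOSTS = {
    "api.openai.com",                              # approved via procurement
    "bedrock-runtime.us-east-1.amazonaws.com",
}
SENSITIVE_MARKERS = ("ssn=", "api_key=", "customer_id=")

def flag_silent_data_bleed(flows):
    """Return findings for egress flows that look like LLM calls,
    target an unsanctioned host, and carry sensitive markers."""
    findings = []
    for flow in flows:
        if flow["dest_host"] in SANCTIONED_MODEL_HOSTS:
            continue                               # sanctioned endpoint: fine
        if not flow.get("looks_like_llm_call"):
            continue                               # ordinary egress: out of scope
        payload = flow.get("payload_sample", "").lower()
        if any(marker in payload for marker in SENSITIVE_MARKERS):
            findings.append({
                "workload": flow["workload"],
                "dest": flow["dest_host"],
                "reason": "sensitive payload to unsanctioned model endpoint",
            })
    return findings
```

The point of the sketch is the shape of the check, not the mechanics: no single static rule catches this, because the decision needs the destination, the protocol behavior, and the payload together, which is exactly the correlation runtime provides.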

None of these are model bugs. They’re cloud kill chains that happen to use AI as one of the legs. And if you try to solve them with a separate AI-only point tool, you’ve just bought yourself another console to log into, another data fabric to feed, another budget line to defend, and (the part that really gets me) another place where the picture stays incomplete.

The answer isn’t a second platform. It’s one platform that already understands the cloud, extended natively to understand AI.

👉 Want to go deeper? Read The AI Visibility Gap: Why You Can’t Secure What You Can’t See →


What’s actually happening

Three things are happening on the Upwind AI Security Platform.

1. View, Protect, Validate

We organized AI Security around the three gaps every modern security team is staring into right now:

  • View: Real-time inventory of every model, agent, framework, and non-human identity in your cloud. AI-Inventory, AI-BOM, AI-NHI, and Discovery, all powered by the runtime fabric. No agentless snapshots. No static crawls. Behavioral truth.
  • Protect: Runtime detection and response for AI workloads. AI-DR, AI-Sensor, AI-SPM, and AI-Data Classification, watching for prompt injection, exfiltration, excessive agency, and unsanctioned egress as they happen. Not after the fact.
  • Validate: Offensive testing built for reasoning paths, not just code paths. AI Exploit, Exfiltration, Resilience & Model Abuse, Attack Validation, and Vulnerability Validation. Reachability beats theoretical risk every time.
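For a sense of what an AI-BOM record has to capture, here's a hypothetical sketch. The field names are ours for illustration, not Upwind's schema; the one field worth noticing is `observed_at_runtime`, which is the difference between behavioral truth and a static guess.

```python
# Illustrative only: the shape of an AI-BOM inventory record and a
# query for the shadow AI estate (running, but never sanctioned).
from dataclasses import dataclass, field

@dataclass
class AiBomEntry:
    model_name: str
    provider: str
    workload: str
    nhi: str                                  # non-human identity it runs as
    frameworks: list = field(default_factory=list)
    sanctioned: bool = False                  # approved by security?
    touches_sensitive_data: bool = False
    observed_at_runtime: bool = False         # seen behaving, not just declared

def shadow_ai_estate(entries):
    """Entries actually observed running that nobody approved."""
    return [e for e in entries if e.observed_at_runtime and not e.sanctioned]
```

The query is one line, but only if the inventory exists and is runtime-grounded; a static crawl populates `observed_at_runtime` for nothing.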

2. The AI Agentic Pack 

Today the AI Agentic Pack becomes generally available, with four agents embedded across the platform:

  • Blue Agent investigates and responds to security incidents.
  • Green Agent drives remediation of prioritized issues.
  • Red Agent identifies the most critical and exploitable attack paths.
  • Choppy AI is the context-aware coordinator: a conversational control plane that orchestrates the right agent for each task across investigation, remediation, validation, and architecture, with full access to your platform context.

We took our time on this. The agents aren't bolted on, and we didn't want your experience to feel like an afterthought. They run on the same runtime telemetry that powers View, Protect, and Validate. When Choppy hands work to Red, Red is reasoning over real attack paths in your environment. When Green executes a fix, Blue already knows whether the underlying behavior has stopped.

That’s what we mean by agentic security powered by runtime context. Not a chatbot. A team.
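The dispatch pattern described above can be sketched in a few lines. This is a conceptual illustration under our own made-up names, not Upwind's internals: the detail that matters is that every agent receives the same runtime context, so a handoff carries the full picture rather than a summary.

```python
# Illustrative only: a coordinator routing tasks to specialist agents,
# all of which reason over the same shared runtime context.

class Agent:
    def __init__(self, name, handles):
        self.name = name
        self.handles = handles                 # intents this agent owns

    def run(self, task, runtime_context):
        # Each agent sees the full runtime context, not a handoff summary.
        return {"agent": self.name, "task": task["intent"],
                "context_keys": sorted(runtime_context)}

AGENTS = [
    Agent("blue", {"investigate"}),            # incident investigation
    Agent("green", {"remediate"}),             # fix execution
    Agent("red", {"validate"}),                # exploitability proof
]

def dispatch(task, runtime_context):
    """Route a task to its specialist; the coordinator takes the rest."""
    for agent in AGENTS:
        if task["intent"] in agent.handles:
            return agent.run(task, runtime_context)
    return {"agent": "choppy", "task": task["intent"]}
```

The design choice the sketch encodes is the one in the paragraph above: routing is cheap; the shared, runtime-grounded context is what makes the handoffs coherent.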

3. Full Platform consolidation: Code → Pipeline → Cloud → Runtime

This is the biggest change for buyers.

AI Security isn’t a SKU bolted onto our platform. It’s part of the same platform you already use for CSPM, CIEM, ASPM, CDR, container security, API security, and DSPM. Same runtime fabric. Same data model. Same console. Same set of policies.

If you’re already an Upwind customer, you don’t have to procure anything new to start using AI Security. If you’re consolidating onto Upwind, the AI capabilities just come along for the ride.


One Platform. One SKU. Frictionless.

Three promises I want every CISO reading this to take seriously, because we mean every one of them.

One Platform. Cloud and AI security under one runtime fabric. One agent footprint. One identity model. One source of behavioral truth.

One SKU. No separate AI Security line item. No procurement gymnastics to add capabilities you’re going to need anyway by next quarter. The full platform is the full platform.

Frictionless. From Code to Cloud in real time. Deploy once, see everything, including the AI estate that didn’t exist on your network diagram six months ago.

This is also what’s underneath our partnership with AWS, which recently named Upwind its CNAPP of Choice. That designation matters because AWS’s customers are building the most aggressive AI estates in the world right now, and the platform handling their cloud security has to handle their AI security in the same breath. Ours does.

👉 Want to go deeper? Read the platform overview on Upwind AI Security →


What this means for the people doing the work

Honestly? We didn’t build this for the industry analysts. We built it for three groups of people who are already feeling the AI shift in their day-to-day.

For CISOs. You don’t have to choose between funding cloud security maturity and funding an AI security program. You don’t have to defend two tool stacks to your board. You can answer the four questions every CISO is now being asked (What AI is running in my cloud? Which workloads can reach sensitive data? Which AI risks are real? If a prompt injection lands tonight, will I know?) from one console, with one set of evidence.

For practitioners. Your existing instincts transfer. The AI estate isn’t some foreign object. It’s another set of workloads running on the same cloud you already secure. AI-Inventory looks like asset inventory. AI-DR looks like detection and response. AI Exploit looks like offensive validation. The terminology is new, but the muscle memory is all the same.

For SOCs. Detections you can act on tonight, not theoretical scoring. Behavioral indicators that fire when an agent calls an unsanctioned model. Attack-path validation that tells you whether a finding is exploitable before you triage it. Less queue. More signal.


One last thing

None of this could’ve happened without our customers.

The Upwind story has been one long conversation with the people doing this work. They told us we needed runtime context, so we built the sensor. They told us posture without runtime was hollow, so we connected them. They told us application security and data security were drifting away from the rest of the cloud, so we pulled them in. And over the last year, they’ve been telling us the same thing about AI: this is happening to us right now, and our existing tools weren’t built for it.

So we built this.

Nobody fully knows where AI security goes from here. The threat models will evolve. The agents will get more capable. The estate will get stranger. What I can tell you is that whatever shape this takes, it’ll take that shape because of what our customers tell us next. That’s how we got here and that’s how we’ll keep going.

To the customers who’ve been on this journey with us: thank you. The next chapter is yours as much as ours.


The Field Guide drops summer 2026

This piece is the opening of the Upwind AI Security launch series. Over the next eight weeks, our CISO, threat research, and field teams will publish a deep dive on each of the three gaps (View, Protect, Validate), culminating in AI Security in 2026: A Field Guide to View, Protect, Validate, our complete reference for the discipline.

Get the Field Guide →


Read more from the launch series