
The majority of Kubernetes containers live for less than 5 minutes.

That’s a security challenge, with an attack surface that’s dynamic, ephemeral, and increasingly abstracted behind layers of orchestration. Traditional perimeter tools aren’t built for that, nor are the stale security strategies that created them. 

So how do teams monitor workloads that don’t persist long enough to be logged? How do they secure “infrastructure” when that infrastructure is now declarative YAML spun up on demand by a developer pipeline?

Container orchestration platforms have transformed application delivery but broken many of our old assumptions — and tools.

We’ve looked into container security, including container security tools and the specifics of safeguarding AWS and runtimes. But what do you need to know about how security’s being redefined for an environment where workloads are short-lived, decentralized, and constantly in flux? 

Introduction to Container Orchestration

Container orchestration is the automated management of containerized applications, including deployment, scaling, networking, and lifecycle, typically using platforms like Kubernetes.
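
As a concrete illustration, here is a minimal Kubernetes Deployment manifest; the name, image, and replica count are illustrative, but the pattern is the point: you declare the desired state, and the orchestrator continuously reconciles the cluster toward it.

```yaml
# A minimal Kubernetes Deployment: the orchestrator continuously reconciles
# the cluster toward this declared state (names and image are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 3                 # desired scale; the scheduler places pods across nodes
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments-api
          image: registry.example.com/payments-api:1.4.2
          ports:
            - containerPort: 8080
```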

In high-scale environments, orchestrators manage thousands of ephemeral containers across distributed infrastructure.

But with great efficiency comes risk: in the 2019 Capital One breach, attackers exploited a misconfigured Web Application Firewall (WAF) running in a containerized environment, moving laterally through orchestrated workloads. Without controls embedded in the orchestrator itself, like strict network policies, workload identity enforcement, and runtime visibility, the infrastructure offered little resistance.

Container orchestration wasn’t responsible for breaches like Capital One’s, but the incident illustrates a tradeoff: these platforms accelerate deployment and scale, yet they can also accelerate exposure, and they require security designed for them.
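
For context, a strict network policy in Kubernetes can be as simple as a namespace-wide default-deny rule, a minimal sketch of the kind of control that limits lateral movement (the namespace name is hypothetical):

```yaml
# Default-deny ingress for a namespace: pods only accept traffic that is
# explicitly allowed by additional, more specific NetworkPolicies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments          # hypothetical namespace
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed, so all inbound traffic is denied
```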

Runtime and Container Scanning with Upwind

Upwind offers runtime-powered container scanning features so you get real-time threat detection, contextualized analysis, remediation, and root cause analysis that’s 10X faster than traditional methods.

Get a Demo

Why Enterprise Security Models Still Struggle with Orchestration

Even as enterprises embrace cloud-native architectures, zero-trust, and shift-left, their actual tooling and processes often lag behind, especially when it comes to orchestrated environments.

That’s because container orchestration changes assumptions about asset visibility, identity, infrastructure lifespan, and ownership boundaries.

Enterprises aren’t relying on a perimeter anymore. But their security controls still expect stability, long-lived infrastructure, and clear ownership boundaries. Orchestration breaks all of that. 

Take asset visibility. In a traditional environment, teams could inventory workloads daily with a single scan. But in an orchestrated and containerized environment, a container might spin up, serve traffic, and terminate in under a minute. 

Asset visibility is a challenge in an ephemeral environment, but automated discovery and continuous monitoring at runtime ensure that no matter how short-lived, containers get on the radar, along with all the context that comes with them.

And consider identity. A developer might deploy a pod using a service account with broad permissions, yet that identity is governed by Kubernetes RBAC, not a centralized IAM platform. Without visibility into how those permissions intersect, attackers abusing a service account inside the cluster can move laterally without tripping any alerts.

A unified identity graph ties together cloud IAM, Kubernetes RBAC, and actual runtime behavior, for visibility across cloud and orchestration layers that can be lost with traditional tools.
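
As a rough sketch of what least privilege looks like at the Kubernetes layer, the manifest below binds a namespaced, read-only Role to a dedicated service account instead of a broadly permissioned default one; all names are illustrative.

```yaml
# A namespaced, read-only Role bound to a dedicated service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payments-reader
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]         # read-only, scoped to one resource type
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-reader-binding
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: payments-api             # hypothetical workload identity
    namespace: payments
roleRef:
  kind: Role
  name: payments-reader
  apiGroup: rbac.authorization.k8s.io
```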

Unless tooling accounts for these patterns, the security model itself is still built for a world that no longer exists, with typical tools and workflows continuing to reinforce those outdated assumptions.

To protect modern containers, correlate runtime behaviors to CVEs, identities, and whether containers are exposed to the internet.

What Modern Security Must Do Differently in Orchestrated Environments

To secure orchestrated environments, security leaders started out by retrofitting existing tools to a fast-changing digital ecosystem. That approach isn’t comprehensive, nor is it scalable. And containerization is here to stay.

The global container market, currently valued at $4.82 billion, is expected to balloon to over $16 billion by 2033. And Kubernetes dominates: over 50% of enterprises use the platform as their primary orchestrator.

Orchestration has made containerization sustainable, and now enterprises run thousands of services, often each in its own container. Orchestration automates tasks that would be unmanageable by hand, speeding up deployment and abstracting infrastructure from applications. Since orchestrators are the execution layer for most enterprise apps, they are a key part of securing the platforms that run today’s businesses, and that has to include all their layers: identity, workload behavior, admission, and networking.

Here are some core capability shifts for orchestrated environments.

Security Capability | Why it’s Needed | What it Requires
Real-time workload discovery | Containers spin up and down too fast for traditional scanning | Orchestrator-level telemetry, runtime sensors (eBPF), and automated asset labeling
Workload identity awareness | IAM roles alone don’t explain pod-level access or service account sprawl | Correlating cloud IAM, service accounts, RBAC, and runtime behavior into a single identity path
Policy-as-code enforcement | Manual gates break in CI/CD pipelines and autoscaling environments | Admission controllers, OPA/Gatekeeper, and guardrails integrated into CI/CD flows
Runtime behavior baselining | Static scanning misses drift, exploit paths, and lateral movement | Process-level monitoring, anomaly detection, and real-time response tied to orchestrator context
Forensics without persistence | Short-lived workloads disappear before incidents are investigated | On-the-fly evidence capture, contextual snapshots, and workload lineage tracking
Decentralized risk prioritization | Security teams can’t chase every alert in fast-moving infrastructure | Risk scoring that incorporates workload exposure, privileges, and business context
Integrated response orchestration | Tickets don’t move fast enough for alerts to trigger action in time | Auto-remediation logic, orchestrator-native playbooks, and conditional response triggers

The first priority is workload discovery. Without visibility, other controls aren’t adequate. In orchestrated environments, containers can spin up and terminate without ever touching CMDB or EDR tools. Real-time visibility, built on orchestrator-native metadata and, ideally, runtime telemetry like eBPF sensors, offers a live map of what’s running, where, and under what context.
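
For example, discovery tooling can key on orchestrator-native metadata. The fragment below uses Kubernetes’ recommended label conventions, plus a hypothetical ownership label, so that even short-lived pods arrive pre-tagged with context:

```yaml
# Metadata fragment from a Deployment or Pod spec: recommended Kubernetes labels
# plus a custom ownership label (the "team" key is a hypothetical convention).
metadata:
  labels:
    app.kubernetes.io/name: payments-api
    app.kubernetes.io/part-of: checkout
    app.kubernetes.io/version: "1.4.2"
    team: payments
```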

Next, adopt runtime behavior baselining. Traditional image scanning could only tell teams what might go wrong, but runtime analysis now answers, “What is going wrong right now?” That’s key in short-lived containers. But it also lets teams find drift, identify suspicious privilege use, and see unexpected process execution. And it works in containers that didn’t exist 5 minutes before. In parallel, implement policy-as-code enforcement to shift control closer to the developer workflow.
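
As one minimal policy-as-code sketch, Kubernetes’ built-in Pod Security Admission rejects non-compliant pods at admission time simply by labeling a namespace (the namespace name is illustrative); OPA/Gatekeeper plays a similar role for custom policies:

```yaml
# Labeling a namespace enables built-in Pod Security Admission enforcement:
# pods that violate the "restricted" profile are rejected at admission.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```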

With insight and guardrails, focus on integrated response orchestration. In orchestrated environments, alerts must trigger real-time containment. That should include automated quarantining of non-production workloads, scaling down exposed services, or tagging suspicious pods for live investigation. Tools that integrate with orchestrators and cloud APIs can enforce this logic dynamically.
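
One common pattern, sketched below, is label-driven quarantine: detection tooling tags a suspicious pod, and a pre-staged NetworkPolicy denies all traffic to anything carrying that label (the label and namespace are illustrative):

```yaml
# Label-driven quarantine: any pod tagged quarantine=true loses all
# ingress and egress traffic until it is investigated or removed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: quarantine
  namespace: payments
spec:
  podSelector:
    matchLabels:
      quarantine: "true"        # applied to pods flagged by detection tooling
  policyTypes:
    - Ingress
    - Egress                    # no rules defined, so all traffic is denied
```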

Finally, implement decentralized risk prioritization. At this stage, teams might find they’re buried in telemetry. They’ll need tooling that combines identities, exposures, privileges, and runtime behavior with business context, eliminating duplicate alerts and correlating signals to provide high-fidelity alerts that require attention. 

These steps suggest tooling changes, but they aren’t about tools. Ultimately, they’re about a strategic progression of capabilities that align security with how an orchestrated infrastructure actually behaves.

Getting Started: Comparing Orchestration Platforms (with Pros and Cons)

While Kubernetes dominates the orchestration conversation today, it’s not the only option, and it wasn’t the first. Container orchestration platforms vary in complexity and flexibility, but all aim to deploy, manage, and scale containers. In that sense, the overarching security strategy behind securing them and the containers they orchestrate remains largely unchanged between platforms. However, each does come with different security implications and features. So let’s compare popular options.

Kubernetes: The Industry Standard

Kubernetes, the open-source, extensible platform, is generally either deployed on company-owned bare-metal servers or outsourced to a cloud provider in a model called Kubernetes as a Service (KaaS).

There are a few common benefits to using Kubernetes, including:

Kubernetes is not without its issues, however, which can include: 

Docker Swarm: Simpler but Limited

Docker Swarm is a less powerful orchestration tool than Kubernetes. What it lacks in scope can be offset by its direct integration into Docker. With many organizations building their containerized applications in Docker, leveraging a built-in orchestration tool can be a no-brainer for DevOps teams.
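
For reference, a minimal Swarm stack file looks much like a standard Compose file with a `deploy` section; the service and image below are illustrative, and the stack would be rolled out with `docker stack deploy`:

```yaml
# Minimal Docker Swarm stack file: one replicated service with a restart policy.
version: "3.8"
services:
  web:
    image: nginx:1.27
    deploy:
      replicas: 3              # Swarm schedules three replicas across the cluster
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
```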

The benefits of using Docker Swarm over another solution include:

Some negatives of using Docker Swarm to be aware of include: 

Nomad: Lightweight and Flexible

While Kubernetes and Docker Swarm dominate enterprise orchestration, other options may fit teams seeking alternatives, such as the simplicity at scale offered by Nomad.

Described as easy to run and maintain, Nomad is designed to natively handle multi-datacenter and multi-region deployments with high scalability. It is cloud-agnostic and is commonly considered a less complex alternative to Kubernetes, since it avoids the operational overhead and steep learning curve but keeps some of the flexibility teams like in Kubernetes. It supports a variety of workload types, not just containers, so Nomad can be appealing for mixed-infrastructure environments.

Some of the benefits of using Nomad are: 

Some of the potential negatives with Nomad are: 

Feature | Kubernetes | Docker Swarm | Nomad
Scalability | Excellent | Good | Very Good
Learning Curve | Steep | Easy | Moderate
Ecosystem | Vast | Limited | Growing

Securing Orchestration is a Mindset Shift, Regardless of Orchestrator

As enterprises scale, each orchestration platform handles key concerns, from identity to scheduling and workload isolation, differently. 

Kubernetes exposes granular RBAC, supports namespaced multi-tenancy, and offers native constructs like Network Policies and admission controllers. But it also requires significant configuration and observability tooling to tie those controls to real risks. 

Nomad, by contrast, is simpler and lighter, but lacks native network segmentation and deep access control. It relies on companion tools for some basic functionalities that Kubernetes handles natively. 

Docker Swarm goes even further toward simplicity, with minimal access control, weak tenancy boundaries, and a small ecosystem. Even Amazon ECS, while deeply integrated into AWS IAM, doesn’t natively support workload-level runtime visibility or pod-specific policy enforcement.

These differences matter because a security model must align with the orchestrator’s capabilities and gaps. So, policies that work in Kubernetes, like requiring all workloads to run with specific service accounts and enforcing image provenance through admission controls, have no equivalent in Docker Swarm. Likewise, with Nomad in a multi-datacenter environment, teams will need to architect service discovery and secrets management externally.

Remember, platform differences are only part of the challenge. There’s a strategic shift to make, too: in orchestrated environments, workloads are no longer static assets to be scanned or firewalled. They are ephemeral, identity-bound processes. They’ll need security built into how they’re scheduled, deployed, and observed.

That means balancing shift-left with shift-right approaches and incorporating continuous, identity-aware visibility. And it means adopting tools that understand and participate in orchestration.

Upwind Rethinks Security at the Speed of Orchestration

Orchestration represents a transformation in how organizations operate: detection and enforcement have to happen in real time. Upwind is built for that shift. It offers continuous, real-time visibility into orchestrated workloads across Kubernetes, containers, cloud services, and identities. By correlating runtime behavior with orchestrator context and cloud IAM, Upwind knows what’s running and who’s running it, along with whether it’s behaving as expected. Those controls aren’t add-ons after deployment, either. They’re built-in controls that leverage deep understanding to inform automation and workload-aware responses.

To see how you can secure orchestration with observability, control, and identity correlation, schedule a demo.

FAQs

What about services like AWS Fargate? 

Yes, AWS Fargate, Azure Container Instances, and Google Cloud Run provide orchestration, but abstract most of it away. Teams don’t manage the scheduler directly, but orchestration still happens under the hood to deploy, scale, and run containers. Here’s how Fargate works:

How do container orchestration platforms handle secrets and credentials?

Most orchestration platforms offer a way to inject secrets into workloads, but the level of security and integration varies. Kubernetes has native support, while others rely more heavily on external tools. Here are some key points to remember:
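
For instance, Kubernetes’ native approach stores a Secret object and injects it into a workload at runtime; the sketch below uses illustrative names and a placeholder value:

```yaml
# A native Kubernetes Secret and a pod that reads one key as an environment variable.
apiVersion: v1
kind: Secret
metadata:
  name: payments-db
type: Opaque
stringData:
  password: example-only         # placeholder; real values should come from a vault or CI secret store
---
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
spec:
  containers:
    - name: payments-api
      image: registry.example.com/payments-api:1.4.2
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: payments-db
              key: password
```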

Can I use orchestration without Docker?

Yes, you can use container orchestration without Docker. While Docker is a popular containerization platform and often used with orchestration tools like Kubernetes, other container runtimes and orchestration platforms exist. You don’t need Docker to orchestrate applications. 

Kubernetes uses the Container Runtime Interface (CRI) and supports runtimes like containerd and CRI-O. Docker itself has been phased out as the default runtime in Kubernetes (its dockershim integration was removed), and containerd, once a Docker component, is now widely used independently in production.

Modern orchestration platforms are runtime-agnostic, and Docker is just one of several compatible options.

What are the most common container orchestration security vulnerabilities?

Orchestrators expand the attack surface with new components, identities, and configurations. Most vulnerabilities stem from misconfigurations, over-permissive access, and a lack of runtime controls, including:

What role does container orchestration play in CI/CD pipelines?

Container orchestration plays a crucial role in CI/CD pipelines by automating the deployment, management, and scaling of containerized applications, enabling faster, more reliable, and scalable software delivery. It essentially acts as the “maestro” for containerized applications, ensuring they are deployed and managed consistently across different environments.
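
As a brief sketch of that role, a Deployment’s rolling-update strategy lets a CI/CD pipeline ship a new image while the orchestrator replaces pods gradually and only routes traffic to ones that pass readiness checks (names and values are illustrative):

```yaml
# Rolling update driven by the orchestrator: the pipeline only changes the image tag.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # never drop below desired capacity during a deploy
      maxSurge: 1                # add one extra pod at a time with the new image
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments-api
          image: registry.example.com/payments-api:1.4.3
          readinessProbe:        # traffic shifts only to pods that report ready
            httpGet:
              path: /healthz
              port: 8080
```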