
The majority of Kubernetes containers live for less than 5 minutes.
That’s a security challenge, with an attack surface that’s dynamic, ephemeral, and increasingly abstracted behind layers of orchestration. Traditional perimeter tools aren’t built for that, nor are the stale security strategies that created them.
So how do teams monitor workloads that don’t persist long enough to be logged? How do they secure “infrastructure” when that infrastructure is now declarative YAML spun up on demand by a developer pipeline?
Container orchestration platforms have transformed application delivery but broken many of our old assumptions — and tools.
We’ve looked into container security before, including container security tools and the specifics of safeguarding AWS environments and container runtimes. But what do you need to know about how security’s being redefined for an environment where workloads are short-lived, decentralized, and constantly in flux?
Introduction to Container Orchestration
Container orchestration is the automated management of containerized applications, including deployment, scaling, networking, and lifecycle, typically using platforms like Kubernetes.
In high-scale environments, orchestrators manage thousands of ephemeral containers across distributed infrastructure. That includes:
- Managing deployment
- Scheduling resources
- Handling fault tolerance and resilience
- Service discovery and networking
- Access control and configuration management
- Horizontal and vertical scaling
- Logging, monitoring, and health checks
- Multi-tenancy and isolation
- Declarative infrastructure
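The glue behind most of these responsibilities is the declarative model: the orchestrator runs a control loop that continuously reconciles declared desired state with observed actual state. A minimal Python sketch of that idea (the dictionaries and action names are illustrative, not any real orchestrator's API):

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired vs. actual replica counts and emit corrective actions,
    the way an orchestrator's control loop does on every iteration."""
    actions = []
    for app, want in desired.items():
        have = actual.get(app, 0)
        if have < want:
            actions.append(("scale_up", app, want - have))
        elif have > want:
            actions.append(("scale_down", app, have - want))
    # Anything running that is no longer declared gets garbage-collected.
    for app in actual:
        if app not in desired:
            actions.append(("terminate", app, actual[app]))
    return actions

desired = {"web": 3, "worker": 2}
actual = {"web": 1, "batch": 4}
for action in reconcile(desired, actual):
    print(action)
```

Because the loop re-runs constantly, a crashed pod or drifted config is corrected automatically, which is why declarative systems tend to be more stable at scale than imperative scripts.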
But with great efficiency comes risk: in the 2019 Capital One breach, attackers exploited a misconfigured Web Application Firewall (WAF) running in a containerized environment, moving laterally through orchestrated workloads. Without controls embedded in the orchestrator itself, like strict network policies, workload identity enforcement, and runtime visibility, the infrastructure offered little resistance.
Container orchestration wasn’t the root cause of breaches like Capital One’s, but the incident illustrates a broader point: these platforms accelerate deployment and scale, yet they can also accelerate exposure, and they require security designed specifically for them.
Runtime and Container Scanning with Upwind
Upwind offers runtime-powered container scanning features so you get real-time threat detection, contextualized analysis, remediation, and root cause analysis that’s 10X faster than traditional methods.
Why Enterprise Security Models Still Struggle with Orchestration
Even as enterprises embrace cloud-native architectures, zero-trust, and shift-left, their actual tooling and processes often lag behind, especially when it comes to orchestrated environments.
That’s because container orchestration changes assumptions about:
- Asset visibility: What is a workload when it spins up and disappears in seconds?
- Identity and access: Who’s allowed to do what, across both cloud IAM and orchestrator Role-Based Access Control (RBAC)?
- Monitoring scope: Where do you instrument observability in ephemeral, layered systems?
- Response logic: Can your playbooks respond at the speed of orchestration?
Enterprises aren’t relying on a perimeter anymore. But their security controls still expect stability, long-lived infrastructure, and clear ownership boundaries. Orchestration breaks all of that.
Take asset visibility. In a traditional environment, teams could inventory workloads daily with a single scan. But in an orchestrated and containerized environment, a container might spin up, serve traffic, and terminate in under a minute.
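To see why periodic scanning falls short, consider the arithmetic: a scanner that ticks every five minutes can only observe a workload if a tick lands inside its lifetime. A small Python sketch, with illustrative timestamps:

```python
import math

def observed_by_scans(start: float, end: float, scan_interval: float) -> bool:
    """A periodic scanner sees a workload only if some scan tick
    falls inside the window [start, end)."""
    first_tick_after_start = math.ceil(start / scan_interval) * scan_interval
    return first_tick_after_start < end

# A container that lives 45 seconds between five-minute scan ticks
# leaves no trace in the inventory.
print(observed_by_scans(start=10, end=55, scan_interval=300))      # False
# A long-lived VM-era workload is always captured.
print(observed_by_scans(start=10, end=86_400, scan_interval=300))  # True
```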

And consider identity. A developer might deploy a pod using a service account with broad permissions; yet, that identity is governed by Kubernetes RBAC, not a centralized IAM platform. Without visibility into how those permissions intersect, attackers abusing a service account inside the cluster can move laterally without tripping any alerts.
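The blind spot comes from auditing each identity layer in isolation. A toy Python sketch, with hypothetical permission names, of why the effective blast radius is the union of the layers:

```python
# Hypothetical permission sets for one workload identity; real systems
# resolve these from cloud IAM policies and Kubernetes RoleBindings.
cloud_iam = {"s3:GetObject"}                      # what the cloud IAM role grants
k8s_rbac = {"secrets:get", "pods/exec:create"}    # what the service account grants

def effective_permissions(cloud: set, rbac: set) -> set:
    """An attacker inside the pod holds BOTH layers at once,
    so the blast radius is the union, not either list alone."""
    return cloud | rbac

combined = effective_permissions(cloud_iam, k8s_rbac)
# A review of cloud IAM alone would never flag in-cluster secret reads:
print("secrets:get" in cloud_iam)   # False
print("secrets:get" in combined)    # True
```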

Unless tooling accounts for these patterns, the security model itself is still built for a world that no longer exists. Typical tools and workflows that reinforce outdated assumptions include:
- Periodic asset scanners: Containers come and go before the next cycle.
- Traditional SIEMs: Logs and agent-based telemetry from known endpoints can fail to see orchestrated workloads with their short lifespans.
- Manual IAM reviews and role audits: Identity in orchestrated environments spans layers, like cloud IAM, service accounts, and Kubernetes RBAC.
- Static image scanning (shift-left-only): Static scans can’t catch runtime behavior, privilege escalation, and drift after deployment.
- CMDB-Driven Risk Classification: CMDBs can’t keep up with dynamic infrastructure and often misrepresent what’s really running.
- Approval-based deployments: in which devs can hotfix or autoscale around pre-deployment reviews.
- Ticket-driven incident response: That can’t account for short-lived threats in short-lived workloads.

What Modern Security Must Do Differently in Orchestrated Environments
To secure orchestrated environments, security leaders initially retrofitted existing tools to a fast-changing digital ecosystem. That approach is neither comprehensive nor scalable, and containerization is here to stay.
The global container market of $4.82 billion is expected to balloon to over $16 billion by 2033. And Kubernetes dominates. Over 50% of enterprises use the platform as their primary orchestrator.
Orchestration has made containerization sustainable, and now enterprises run thousands of services, often each in its own container. Orchestration automates tasks that would be unmanageable by hand, speeding up deployment and abstracting infrastructure from applications. Since it’s the execution layer for most enterprise apps, orchestrators are a key part of securing the platforms that run today’s businesses, and that has to include all its layers: identity, workload behavior, admission, and networking.
Here are some core capability shifts for orchestrated environments.
| Security Capability | Why It’s Needed | What It Requires |
| --- | --- | --- |
| Real-time workload discovery | Containers spin up and down too fast for traditional scanning | Orchestrator-level telemetry, runtime sensors (eBPF), and automated asset labeling |
| Workload identity awareness | IAM roles alone don’t explain pod-level access or service account sprawl | Correlating cloud IAM, service accounts, RBAC, and runtime behavior into a single identity path |
| Policy-as-code enforcement | Manual gates break in CI/CD pipelines and autoscaling environments | Admission controllers, OPA/Gatekeeper, and guardrails integrated into CI/CD flows |
| Runtime behavior baselining | Static scanning misses drift, exploit paths, and lateral movement | Process-level monitoring, anomaly detection, and real-time response tied to orchestrator context |
| Forensics without persistence | Short-lived workloads disappear before incidents are investigated | On-the-fly evidence capture, contextual snapshots, and workload lineage tracking |
| Decentralized risk prioritization | Security teams can’t chase every alert in fast-moving infrastructure | Risk scoring that incorporates workload exposure, privileges, and business context |
| Integrated response orchestration | Tickets don’t move fast enough for alerts to trigger action in time | Auto-remediation logic, orchestrator-native playbooks, and conditional response triggers |
The first priority is workload discovery. Without visibility, other controls aren’t adequate. In orchestrated environments, containers can spin up and terminate without ever touching a CMDB or EDR tool. Real-time visibility, built on orchestrator-native metadata and, ideally, runtime telemetry like eBPF sensors, offers a live map of what’s running, where, and under what context.
Next, adopt runtime behavior baselining. Traditional image scanning could only tell teams what might go wrong; runtime analysis answers, “What is going wrong right now?” That’s key in short-lived containers. It also lets teams find drift, identify suspicious privilege use, and see unexpected process execution, and it works in containers that didn’t exist five minutes before. In parallel, implement policy-as-code enforcement to shift control closer to the developer workflow.
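The policy-as-code idea is usually implemented with Kubernetes admission controllers and OPA/Gatekeeper policies written in Rego; the intent can be sketched in Python (the pod-spec fields mirror Kubernetes naming, but this is an illustration, not a real admission webhook):

```python
TRUSTED_REGISTRIES = ("registry.internal.example/",)  # hypothetical registry

def admit(pod_spec: dict) -> tuple:
    """Reject pods that run privileged or pull from untrusted registries,
    before they ever reach a node."""
    for c in pod_spec.get("containers", []):
        if c.get("securityContext", {}).get("privileged", False):
            return False, f"container {c['name']} requests privileged mode"
        if not c["image"].startswith(TRUSTED_REGISTRIES):
            return False, f"image {c['image']} is not from a trusted registry"
    return True, "admitted"

print(admit({"containers": [{"name": "app",
                             "image": "registry.internal.example/app:1.2"}]}))
print(admit({"containers": [{"name": "app", "image": "docker.io/evil:latest"}]}))
```

Because the check runs at admission time, a hotfix or autoscaling event cannot route around it the way it can route around a manual pre-deployment review.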
With insight and guardrails in place, focus on integrated response orchestration. In orchestrated environments, alerts must trigger real-time containment. That should include automated quarantining of non-production workloads, scaling down exposed services, or tagging suspicious pods for live investigation. Tools that integrate with orchestrators and cloud APIs can enforce this logic dynamically.
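That containment logic can be sketched as a simple alert router; the action names and alert fields here are illustrative placeholders, not any real product’s API:

```python
def respond(alert: dict) -> str:
    """Map an alert to an orchestrator-native action in real time,
    instead of waiting on a ticket queue."""
    if alert["severity"] == "critical" and alert["env"] != "production":
        return f"quarantine:{alert['workload']}"        # isolate via network policy
    if alert["type"] == "public_exposure":
        return f"scale_down:{alert['workload']}"        # shrink the exposed surface
    return f"tag_for_investigation:{alert['workload']}" # label the pod, keep it live

print(respond({"severity": "critical", "env": "staging",
               "type": "crypto_miner", "workload": "api-7f9c"}))
```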
Finally, implement decentralized risk prioritization. At this stage, teams might find they’re buried in telemetry. They’ll need tooling that combines identities, exposures, privileges, and runtime behavior with business context, eliminating duplicate alerts and correlating signals to provide high-fidelity alerts that require attention.
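A toy version of that prioritization, with placeholder weights rather than any published scoring model:

```python
def risk_score(finding: dict) -> int:
    """Weight exposure, privilege, runtime anomalies, and business
    context together; the weights are illustrative placeholders."""
    score = 0
    if finding.get("internet_exposed"):
        score += 40
    if finding.get("privileged"):
        score += 30
    if finding.get("runtime_anomaly"):
        score += 20
    if finding.get("business_critical"):
        score += 10
    return score

findings = [
    {"id": "a", "internet_exposed": True, "privileged": True, "runtime_anomaly": True},
    {"id": "b", "runtime_anomaly": True},
]
ranked = sorted(findings, key=risk_score, reverse=True)
print([f["id"] for f in ranked])  # ['a', 'b']: the exposed, privileged workload first
```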
These steps suggest tooling changes, but they aren’t about tools. Ultimately, they’re about a strategic progression of capabilities that align security with how an orchestrated infrastructure actually behaves.
Getting Started: Comparing Orchestration Platforms (with Pros and Cons)
While Kubernetes dominates the orchestration conversation today, it’s not the only option, and it wasn’t the first. Container orchestration platforms vary in complexity and flexibility, but all aim to deploy, manage, and scale containers. In that sense, the overarching security strategy behind securing them and the containers they orchestrate remains largely unchanged between platforms. However, each does come with different security implications and features. So let’s compare popular options.
Kubernetes: The Industry Standard
The open-source, extensible Kubernetes platform is generally deployed either on company-owned bare-metal servers or outsourced to a cloud provider in a model called Kubernetes as a Service (KaaS).
There are a few common benefits to using Kubernetes, including:
- An extensive ecosystem: The community-driven favorite comes with a massive and active community, vast tooling, and integrations with virtually every cloud provider and technology.
- It’s highly scalable and flexible: Kubernetes is capable of managing very large and complex container deployments. At its heart is a control loop that constantly reconciles the desired state with the actual state, so it’s inherently more stable at a large scale than systems that require imperative instructions.
- Kubernetes comes with a rich feature set: Kubernetes provides advanced features such as auto-scaling, self-healing, rolling updates, and sophisticated networking options.
Kubernetes is not without its issues, however, which can include:
- A steep learning curve: Kubernetes is a flexible and customizable option, which means that it can be complex to set up, configure, and manage, especially for beginners.
- Operational overhead: Kubernetes requires dedicated expertise for effective management and troubleshooting.
Docker Swarm: Simpler but Limited
Docker Swarm is a less powerful orchestration tool than Kubernetes. What it lacks in scope can be offset by its direct integration into Docker. With many organizations building their containerized applications in Docker, leveraging a built-in orchestration tool can be a no-brainer for DevOps teams.
The benefits of using Docker Swarm over another solution include:
- It’s easy to learn and use: Docker Swarm integrates seamlessly with Docker and has a simpler architecture compared to Kubernetes. This can be useful for organizations with limited resources and smaller DevOps teams, as compared to Kubernetes, which requires more expertise to learn and configure.
- It’s lightweight: Docker Swarm requires fewer resources than Kubernetes to run. It’s not as large a system as Kubernetes, making it easier to deploy.
- It’s great for smaller deployments: Docker Swarm is a perfect solution for simpler applications and smaller teams already invested in Docker.
Some negatives of using Docker Swarm to be aware of include:
- Limited scalability: It’s not as robust for managing very large and complex deployments compared to Kubernetes.
- Smaller ecosystem: There are fewer third-party integrations and community support compared to Kubernetes.
- Fewer advanced features: Lacks some of the more sophisticated features offered by Kubernetes. Need custom resource definitions? Sidecar patterns and service mesh integrations? Pod-level network policies? Admission controllers to enforce fine-grained security at deployment time? Docker Swarm won’t provide them.
Nomad: Lightweight and Flexible
While Kubernetes and Docker dominate enterprise orchestration, other choices may work best for teams seeking alternative solutions, such as the simplicity at scale offered by Nomad.
Described as easy to run and maintain, Nomad is designed to natively handle multi-datacenter and multi-region deployments with high scalability. It is cloud-agnostic and is commonly considered a less complex alternative to Kubernetes, since it avoids the operational overhead and steep learning curve, but keeps some of the flexibility teams like in Kubernetes. It supports a variety of workload types, not just containers, so Nomad can be appealing for mixed-infrastructure environments.
Some of the benefits of using Nomad are:
- A simple and elegant architecture: Nomad is easier to understand and operate than Kubernetes.
- Its lightweight and resource-efficient approach: Nomad adds minimal overhead and can run on various operating systems.
- Support for multiple workloads: Nomad can orchestrate both containers and non-containerized applications, like Java applications or raw binaries.
- Good performance: Nomad is known for its speed and efficiency.
Some of the potential negatives with Nomad are:
- An underdeveloped community and ecosystem: Nomad has fewer integrations and less community support compared to Kubernetes. Its developer community is smaller, but that doesn’t mean there is no community for best practice sharing.
- A less mature feature set: While actively developed, Nomad may lack some of the advanced features found in Kubernetes, reflecting an earlier stage of maturity. There are no Kubernetes-style network policies, custom resource definitions, service mesh integration, or pod lifecycle management, and service discovery has traditionally been delegated to companion tools like Consul. Similar to Docker Swarm, Nomad keeps it lightweight.
| Feature | Kubernetes | Docker Swarm | Nomad |
| --- | --- | --- | --- |
| Scalability | Excellent | Good | Very Good |
| Learning Curve | Steep | Easy | Moderate |
| Ecosystem | Vast | Limited | Growing |
Securing Orchestration is a Mindset Shift, Regardless of Orchestrator
As enterprises scale, each orchestration platform handles key concerns, from identity to scheduling and workload isolation, differently.
Kubernetes exposes granular RBAC, supports namespaced multi-tenancy, and offers native constructs like Network Policies and admission controllers. But it also requires significant configuration and observability tooling to tie those controls to real risks.
Nomad, by contrast, is simpler and lighter, but lacks native network segmentation and deep access control. It relies on companion tools for some basic functionalities that Kubernetes handles natively.
Docker Swarm goes even further toward simplicity, with minimal access control, weak tenancy boundaries, and a small ecosystem. Even Amazon ECS, while deeply integrated with AWS IAM, doesn’t natively support workload-level runtime visibility or pod-specific policy enforcement.
These differences matter because a security model must align with the orchestrator’s capabilities and gaps. So, policies that work in Kubernetes, like requiring all workloads to run with specific service accounts and enforcing image provenance through admission controls, have no equivalent in Docker Swarm. Likewise, in Nomad, in a multi-datacenter environment, teams will need to architect service discovery and secrets management externally.
Remember, platform differences are only part of the challenge. There’s a strategic shift to make, too: in orchestrated environments, workloads are no longer static assets to be scanned or firewalled. They are ephemeral, identity-bound processes. They’ll need security built into how they’re scheduled, deployed, and observed.
It means balancing shift-left with shift-right approaches and incorporating continuous, identity-aware visibility. And it means adopting tools that understand and participate in orchestration.
Upwind Rethinks Security at the Speed of Orchestration
Orchestration represents a transformation in how organizations behave — detection and enforcement have to happen in real time. Upwind is built for the shift. It offers continuous, real-time visibility into orchestrated workloads across Kubernetes, containers, cloud services, and identities. By correlating runtime behavior with orchestrator context and cloud IAM, Upwind knows what’s running and who’s running it, along with whether it’s behaving as expected. Those controls aren’t add-ons after deployment, either. They’re built-in controls that leverage deep understanding to inform automation and workload-aware responses.
To see how you can secure orchestration with observability, control, and identity correlation, schedule a demo.
FAQs
What about services like AWS Fargate?
Yes, AWS Fargate, Azure Container Instances, and Google Cloud Run provide orchestration, but abstract most of it away. Teams don’t manage the scheduler directly, but orchestration still happens under the hood to deploy, scale, and run containers. Here’s how Fargate works:
- It schedules containers on AWS-managed infrastructure automatically
- It integrates with ECS or EKS as the control plane
- There’s no direct access to nodes, schedulers, or control loops
- Security and visibility depend on the orchestrator (ECS or EKS), not Fargate itself
- Fine-grained policy control and runtime visibility often require add-on tools
How do container orchestration platforms handle secrets and credentials?
Most orchestration platforms offer a way to inject secrets into workloads, but the level of security and integration varies. Kubernetes has native support, while others rely more heavily on external tools. Here are some key points to remember:
- Kubernetes supports Secrets as a built-in resource, but stores them base64-encoded, not encrypted by default unless configured
- Secrets can be mounted as files or exposed as environment variables
- Best practice is to pair Kubernetes with external secret managers
- Nomad does not manage secrets natively, and typically integrates with external tools
- Docker Swarm provides in-memory secret distribution to nodes, but lacks fine-grained access controls
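One point from the list above is worth demonstrating: base64 is an encoding, not encryption, so a Kubernetes Secret’s stored value is trivially recoverable by anyone who can read the object:

```python
import base64

# Kubernetes stores Secret values base64-encoded by default. Base64 is an
# encoding, not encryption: anyone with read access to the object can
# recover the plaintext in one call.
stored = base64.b64encode(b"s3cr3t-password").decode()
print(stored)                             # the opaque-looking value in the Secret object
print(base64.b64decode(stored).decode())  # trivially recovered: s3cr3t-password
```

This is why encryption at rest for etcd and external secret managers are standard hardening steps rather than optional extras.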
Can I use orchestration without Docker?
Yes, you can use container orchestration without Docker. While Docker is a popular containerization platform and often used with orchestration tools like Kubernetes, other container runtimes and orchestration platforms exist. You don’t need Docker to orchestrate applications.
Kubernetes uses the Container Runtime Interface (CRI) and supports runtimes like containerd and CRI-O. Docker has already been phased out as a default runtime in Kubernetes. And containerd, once a Docker component, is now widely used independently in production.
Modern orchestration platforms are runtime-agnostic, and Docker is just one of several compatible options.
What are the most common container orchestration security vulnerabilities?
Orchestrators expand the attack surface with new components, identities, and configurations. Most vulnerabilities stem from misconfigurations, over-permissive access, and a lack of runtime controls, including:
- Exposed Kubernetes APIs due to weak authentication or public access
- Over-permissive RBAC roles
- Unrestricted network communication between pods or services
- Use of insecure container images without verification or scanning
- Lack of runtime visibility, making malicious activity hard to detect
- Secrets stored unencrypted
- Compromised service accounts
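Several of these issues are mechanically auditable. For example, over-permissive RBAC roles can be flagged by scanning rule definitions for wildcards; a Python sketch whose rule shape mirrors Kubernetes Role rules (an illustration, not a real audit tool):

```python
def overly_permissive(rules: list) -> bool:
    """Flag RBAC rules that grant wildcard verbs or resources; the dict
    shape mirrors Kubernetes Role rules, but this is an audit sketch."""
    return any("*" in r.get("verbs", []) or "*" in r.get("resources", [])
               for r in rules)

admin_like = [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]
scoped     = [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]}]
print(overly_permissive(admin_like))  # True
print(overly_permissive(scoped))      # False
```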
What role does container orchestration play in CI/CD pipelines?
Container orchestration plays a crucial role in CI/CD pipelines by automating the deployment, management, and scaling of containerized applications, enabling faster, more reliable, and scalable software delivery. It essentially acts as the “maestro” for containerized applications, ensuring they are deployed and managed consistently across different environments.