Applications need input to function on any operating system, and containers are no exception, needing instructions to run and perform their intended operations. However, given the scale at which containers are deployed within organizations, managing them manually is impractical. 

This is where container runtimes come in. Runtimes solve the problem of operating containers at scale, but because they manage the interface between the container and the host system, they also introduce a new class of security challenges. This blog post will examine container runtimes and their function within a container architecture, as well as key considerations for security teams. 

What is a Container Runtime? 

A container runtime is a software program that unpacks a container image and turns its contents into a running process on a computer. The runtime interacts with its environment – whether in the cloud, on a bare-metal server, or on a Linux host – to pull the container image and translate everything within it into a functioning application. 

Container runtimes are thus essential when using containerized workloads. Without a runtime, containers remain static, non-functional images of applications. Runtimes in all their variants ensure containers operate effectively throughout their entire lifecycle. 

Container Runtime Functions and Responsibilities 

Container runtimes are designed to facilitate the execution and management of containers. They have three core responsibilities within the container ecosystem: 

  • Container Execution
  • Interaction with the host OS 
  • Resource allocation and management

Container Execution

Container runtimes execute containers and manage them throughout their entire lifecycle. This includes monitoring container health and restarting containers that fail during normal operation. Runtimes also clean up when a container completes its tasks. 

A look at the status of running containers in a CNAPP.

Interacting with the Host Operating System 

Container runtimes use operating system features, like namespaces and cgroups, to isolate and manage resources for container workloads. The goal is to confine each container so that the processes inside it cannot disrupt the host or other containers, ensuring a secure environment.
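To make this concrete, isolation is requested declaratively in the OCI runtime configuration (`config.json`). A trimmed, illustrative fragment asking the runtime to create fresh namespaces for a container might look like this:

```json
{
  "ociVersion": "1.0.2",
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "network" },
      { "type": "mount" },
      { "type": "ipc" },
      { "type": "uts" }
    ]
  }
}
```

Each entry tells an OCI-compliant runtime such as runc to place the container's process in a new namespace of that type, giving it its own process tree, network stack, and mount table, among others.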

Security in a CNAPP showing isolation mechanisms in place, where containers are confined and interactions are monitored.

Container Resource Allocation

Container runtimes allocate and regulate CPU, memory, and I/O for each container. By doing this, runtimes prevent any single container from monopolizing resources – a potential major stumbling block in multi-tenant environments. The efficient management of these resources, in coordination with the host OS, is one of the reasons containerization is so widely adopted in modern software development.
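Resource limits travel the same declarative path: the runtime translates fields in `config.json` into cgroup settings on the host. A trimmed, illustrative fragment capping a container at 256 MiB of memory and half a CPU (a quota of 50 ms per 100 ms period) might look like:

```json
{
  "linux": {
    "resources": {
      "memory": { "limit": 268435456 },
      "cpu": { "quota": 50000, "period": 100000 }
    }
  }
}
```

If the containerized process exceeds these limits, the kernel throttles its CPU time or triggers the out-of-memory handling defined for its cgroup, keeping neighbors unaffected.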

How Does a Container Runtime Work? 

Back in 2015, the Open Container Initiative (OCI) was launched to define common standards for container formats and container runtimes. Although Docker created the OCI, it has since been supported by major cloud-native companies like Amazon Web Services, IBM, Microsoft, Alibaba Cloud, and OpenStack. 

OCI, also backed by the Linux Foundation, defines standards that runtimes must follow, based on three key specifications: 

  1. What the actual container image includes 

This defines the contents of a container image, including application code, dependencies, libraries, and configurations that make the container functional.

  2. How runtimes can retrieve container images

Container runtimes must follow specific protocols to fetch container images from registries or repositories, ensuring consistent platform management.

  3. How container images are unpacked, layered, mounted, and executed

Runtimes must adhere to standards for how images are structured, layered, and executed. This ensures images can be efficiently decompressed, mounted, and run on any OCI-compliant platform. 

Following standards ensures interoperability within the container management ecosystem. 
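As a concrete example of what these specifications define, an OCI image manifest describes the image's configuration and its content-addressed layers, which a runtime fetches and unpacks in order. A trimmed sketch (digests abbreviated, sizes illustrative) might look like:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:...",
    "size": 7023
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:...",
      "size": 32654
    }
  ]
}
```

Because every layer is addressed by its digest, any OCI-compliant runtime can verify what it pulled before mounting and running it.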

In terms of process, the runtime follows this basic flow based on OCI standards: 

  1. The runtime is asked to create a container instance, including where the image can be retrieved and a unique identifier. 
  2. The runtime next reads and verifies the container image’s configuration.
  3. The runtime issues a start command, launching the main process specified in the container image’s configuration. 
    • The container’s root filesystem is mounted, and namespaces ensure isolation from the host and other containers. 
    • Resource limits are enforced by cgroups to ensure the container operates within defined quotas. 
    • The new process launches with its root filesystem set to the mount point created in the previous step, so the container can only see specific files and operates within the defined quotas and security policies. 
  4. Once finished, a stop is issued to shut down the container instance. 
  5. The container instance is deleted, which then removes all references to the container and cleans up the file system. 
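The five-step flow above can be sketched as a tiny state machine. This is an illustrative model only; names like `Runtime` and `Container` are hypothetical, and a real runtime (runc, crun) performs these transitions against actual kernel primitives rather than a Python dict:

```python
# Illustrative sketch of the OCI-style lifecycle described above.
# All class and method names are hypothetical, not a real runtime API.

class LifecycleError(Exception):
    """Raised when a lifecycle transition is not allowed."""

class Container:
    def __init__(self, container_id, image_ref):
        self.id = container_id
        self.image_ref = image_ref
        self.state = "creating"

class Runtime:
    def __init__(self):
        self._containers = {}

    def create(self, container_id, image_ref):
        # Steps 1-2: register the instance and (conceptually) verify its config.
        if container_id in self._containers:
            raise LifecycleError(f"id {container_id!r} already in use")
        c = Container(container_id, image_ref)
        c.state = "created"  # rootfs mounted, namespaces prepared
        self._containers[container_id] = c
        return c

    def start(self, container_id):
        # Step 3: launch the image's configured main process.
        c = self._get(container_id)
        if c.state != "created":
            raise LifecycleError(f"cannot start container in state {c.state!r}")
        c.state = "running"

    def stop(self, container_id):
        # Step 4: shut the instance down.
        c = self._get(container_id)
        if c.state != "running":
            raise LifecycleError(f"cannot stop container in state {c.state!r}")
        c.state = "stopped"

    def delete(self, container_id):
        # Step 5: remove all references and clean up.
        c = self._get(container_id)
        if c.state == "running":
            raise LifecycleError("stop the container before deleting it")
        del self._containers[container_id]

    def _get(self, container_id):
        try:
            return self._containers[container_id]
        except KeyError:
            raise LifecycleError(f"no such container: {container_id!r}") from None

rt = Runtime()
rt.create("web-1", "registry.example.com/app:1.0")  # hypothetical image ref
rt.start("web-1")
rt.stop("web-1")
rt.delete("web-1")
```

The point of the sketch is the ordering constraint: a container can only be started from the created state, stopped from the running state, and deleted once it is no longer running, which is exactly the contract the OCI lifecycle imposes.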

Most of the time, users have no direct interaction with this process. Instead, they will use an orchestration platform like Kubernetes or a higher-level container engine, which handles container scheduling, scaling, and monitoring. Regardless, the above process is how runtimes interact with containers, even if not directly exposed to users. 

What are the Types of Container Runtimes?

There are three general types of container runtimes, distinguished primarily by how close they sit to the individual containers themselves. They are: 

  • Low-level runtimes
  • High-level runtimes
  • Sandboxed and virtualized runtimes 

Low-level Container Runtimes 

Low-level container runtimes are the closest to the Linux kernel. They’re responsible for launching the containerized process and have the most direct interaction with containers. Low-level runtimes are also the ones that implement the OCI runtime specification, as the focus there is on container lifecycle management. These are the basic building blocks that make containers possible; they do the actual unpacking, creating, starting, and stopping of container instances.

The types of low-level container runtimes are: 

  • runc — The de facto standard low-level runtime. Docker originally created it, but donated it to the OCI in 2015.
  • runhcs — A fork of runc that Microsoft created to run containers on Windows machines.
  • crun — A runtime focused on being small and efficient, with binaries of about 300 KB.
  • containerd — This runtime straddles the border between low-level and high-level because it includes an API layer. Users often interact with it via container engines like Docker or Kubernetes, although it can also be accessed directly through its API for advanced use cases. 

High-level Container Runtimes 

High-level container runtimes, as a general rule, offer greater abstraction than their low-level counterparts. They’re responsible for the transport and management of container images, unpacking, and passing off to the low-level runtime to run the container. These are the runtimes that users directly interact with. A few examples include: 

  • Docker (containerd) — Docker, which uses containerd under the hood, is the leading container system and the most common Kubernetes container runtime, offering image specifications, a command-line interface, and a container image-building service, among other features. 
  • CRI-O — An open-source implementation of the Kubernetes container runtime interface (CRI) and an alternative to rkt and Docker. It runs pods through OCI-compatible runtimes, primarily runc and Kata, though any OCI-compatible runtime can be used. 
  • Windows Containers and Hyper-V Containers — Two alternatives to Windows Virtual Machines (VMs), available on Windows Server. Windows containers offer abstraction, similar to Docker, while Hyper-V containers provide virtualization. Hyper-V containers offer notable security benefits because each has its own kernel, which also lets companies run applications that are incompatible with the host system. However, they can introduce performance overhead (compared to regular Windows containers) due to virtualization. 
  • Podman — Red Hat built Podman as a more secure model than Docker’s original implementation. Along with Buildah and Skopeo, Podman’s goal is to provide the same high-quality experience as Docker. Unlike Docker, which uses a centralized daemon, Podman operates without a long-running process and runs containers as individual processes. 

Sandboxed or Virtualized Runtimes

The OCI standards also include guidance for sandboxed and virtualized runtimes: 

  • Sandboxed runtimes — These runtimes offer greater isolation between the containerized process and the host because they don’t share a kernel. The runtime process runs on a unikernel or kernel proxy layer, which interacts with the host kernel, thus reducing the attack surface. Some examples are gVisor and nabla-containers. 
  • Virtualized runtimes — The container process runs in a virtual machine through a VM interface rather than on the host kernel. This offers greater host isolation but can also slow down the process compared to a native runtime. Kata Containers is one example of this class. 

A Word on Sandboxed Runtimes

Sandboxed container runtimes, such as gVisor, nabla-containers, and Kata Containers, provide stronger isolation than standard runtimes by running containers in a virtualized environment. For example, gVisor intercepts syscalls and runs them in a user-space kernel, so there’s less risk of container escapes. Similarly, Kata Containers uses lightweight VMs to isolate workloads from the host. 

But sandboxing isn’t a complete or ideal solution. Compared to standard runtimes, sandboxed environments incur additional overhead due to issues such as syscall emulation (gVisor) or VM boot times (Kata). Benchmarks show that gVisor has higher latency for system calls and increased CPU overhead, while Kata Containers take longer to start up.

High-security environments, such as financial services, government agencies, and multi-tenant cloud providers, have adopted sandboxed runtimes, particularly for their most sensitive workloads. For example, Google Cloud Run uses gVisor for enhanced security, a promising approach for workloads where security is paramount.

Container Runtime Interface (CRI) Explained

The CRI is a Kubernetes-specific API layer that enables the orchestrator to interact with various container runtimes. The CRI makes Kubernetes runtime-agnostic rather than dependent on a single runtime, such as Docker Engine. 

Without the CRI, teams wouldn’t be able to choose the best runtime for their workloads based on factors such as security and performance needs. And when innovative new runtimes emerge in the future, Kubernetes will be able to incorporate them without re-architecting.

Because of the CRI, Kubernetes stays adaptable and works with any CRI-compatible, OCI-compliant runtime.

What else do you need to know about this small but mighty component of the Kubernetes ecosystem? It’s all about making informed decisions about workloads. Here are key considerations:

  • Security postures differ across CRI runtimes. CRI-O is minimalist and security-focused. containerd comes with more features, but the plugins used to add them can broaden the attack surface.
  • Moving off Docker can introduce gaps. If teams rely on Docker-specific tools, such as image scanning, they’ll likely need new solutions for CRI-native runtimes.
  • Compliance considerations have changed. CRI-based runtimes may lack built-in logging for container events, necessitating the use of new security tools. And teams can’t forget to integrate runtime logging with their existing XDR or SIEM solutions for forensic tracking.
  • Runtime drift happens. Security controls need to be enforced at the Kubernetes level to ensure consistency when teams use more than one runtime.
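In practice, teams expose multiple runtimes to Kubernetes through the RuntimeClass API and select one per workload. A sketch, assuming a node whose containerd is configured with a `runsc` (gVisor) handler and a hypothetical image name:

```yaml
# RuntimeClass mapping a name to a CRI handler configured on the node.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc        # assumes containerd is configured with this handler
---
# Pod opting into the sandboxed runtime for an untrusted workload.
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: gvisor
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
```

Pods that omit `runtimeClassName` keep using the cluster's default runtime, so sandboxing can be adopted workload by workload rather than cluster-wide.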

Common Container Runtime Tools: Engines vs. Orchestrators 

Within the container management universe, there are three classes of tools and processes used to operate containers no matter where they are hosted. 

  • Container runtime — Operates the container directly, as has been discussed.
  • Container engine — A software program that accepts user requests, typically through a command-line interface, pulls images, and, from the user’s perspective, runs the container.
  • Container orchestrator – A piece of software managing sets of containers across different computing resources, handling network and storage configurations, and delegating to different runtimes.

It’s important to note that container engines can sometimes operate like runtimes and can be used from within other tools like orchestrators. The difference among these is a matter of scale: runtimes operate directly on containers; engines provide user-facing interfaces and hand individual containers off to runtimes; orchestrators manage multiple sets of containers across multiple environments. 

What CISOs Need to Understand About Container Runtimes

As containers become more common in more contexts, security teams and CISOs need to pay closer attention to keeping the runtime phase protected against attack and unintentional error. Much like with traditional applications, it’s easy for developers and DevOps teams to introduce mistakes or misconfigurations within containers that are not found until they try to unpack and run the image. 

As a result, CISOs need to ensure that mandatory scanning for vulnerabilities is included in container deployment processes, such as static scanning prior to deployment, sandboxing for dynamic testing, and behavior monitoring for runtime environments. Ensuring that this testing occurs can prevent the addition of unintentional errors and protect containers from known vulnerabilities. 

Runtime is an especially critical phase for testing, regardless of whether it involves containers, APIs, or traditional applications. After all, static code or image scanning can only surface a limited number of issues and known vulnerabilities. Dynamic testing is thus vital for catching problems that only appear while the workload is running. 

A few of the common best practices CISOs and security teams can implement include: 

  • Securing container registries — Implementing access control for registries and image signing to ensure trackability and security. 
  • Securing container deployment — Reinforcing the host operating system, using strong firewall regulations, implementing role-based access control, and ensuring that containers are deployed with the least privilege necessary. 
  • Monitoring container activity — Understanding how containers act and monitoring them for any irregular activity is critical for ensuring the security of the container as well as the underlying host OS. 
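Least-privilege deployment can be enforced directly in the pod specification. A minimal sketch (image name hypothetical) that drops root, privilege escalation, and all Linux capabilities:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: least-privilege-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      securityContext:
        runAsNonRoot: true                  # refuse to start as UID 0
        allowPrivilegeEscalation: false     # block setuid-style escalation
        readOnlyRootFilesystem: true        # container can't modify its own filesystem
        capabilities:
          drop: ["ALL"]                     # remove every Linux capability
```

Settings like these limit what an attacker can do even if the application inside the container is compromised, which is the essence of deploying with the least privilege necessary.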

Container runtimes are a potentially risky phase of container deployment, especially if the underlying architecture is not monitored or kept secure. CISOs need to ensure that their teams implement strong controls and testing for containers before and during runtimes to ensure that they remain protected against potential compromise. 

Upwind Secures Container Runtimes

Cloud-native deployments are becoming more common and, therefore, more vital to security. As the security implications of container runtimes are also better understood and accounted for, vendors who operate within the space will likely add more security features to ensure that containerized workloads are more protected. In addition, developers, operations professionals, DevOps teams, and security professionals alike need to ensure that, in the interim, they can protect their containers against threats. 

That’s why cloud-native application protection platforms like Upwind are so vital. Adding container runtime security to track behavior, mitigate risk, and monitor workloads means that developers and DevOps teams can protect their applications and the underlying host more readily. A comprehensive CNAPP also means that CISOs and security teams have greater visibility for improved detection and response, plus alerts for potential incidents. 

To learn more about Upwind’s container runtime protection solution and get advice on best practices, schedule a demo.

FAQ

What level of performance impact should we expect when implementing container runtime security?

The performance impact of implementing container runtime security varies from negligible to moderate, depending on the security measures implemented and individual workload demands. Some studies estimate the effect at a few percentage points. To estimate the effect, consider the following:

  • Runtime choice, with lightweight options like CRI-O and containerd typically introducing less overhead than heavier options like Docker Engine.
  • Security tools, with intrusion detection typically carrying less performance impact than active runtime scanning, although the impact depends on factors such as logging frequency, event processing, and enforcement options.
  • Policy enforcement, with stricter controls, like seccomp profiles, introducing more latency.
  • Workload characteristics, with high-I/O or low-latency apps tending to feel the impact more than batch workloads.
  • Optimization, with CLI-based tuning and config adjustments helping to minimize slowdowns.
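To illustrate the seccomp trade-off above: a deny-by-default profile only permits the syscalls it lists, and every filtered call adds a small amount of kernel-side overhead. This fragment follows the OCI/Docker seccomp profile format but is intentionally minimal; real workloads need a far longer allowlist:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "openat", "close", "futex", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Any syscall not on the list fails with an error, which is exactly the kind of strict enforcement that improves security posture at a modest latency cost.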

How does container runtime security integrate with our existing security tools?

Container runtime security integrates with existing tools by feeding runtime data into SIEM and XDR platforms, vulnerability management tools, and cloud security solutions, like CNAPPs. Here are the key integration points:

  • SIEM
    • Runtime logs and security events feed into centralized logging
    • Correlates container runtime alerts with broader enterprise threats
  • XDR and EDR
    • Detects behavioral anomalies in containerized environments
    • Extends response automation
  • Vulnerability Scanners
    • Scans container images before deployment for known vulnerabilities
    • Uses runtime insights to prioritize vulnerabilities based on real-world exposures
  • CNAPP
    • Combines runtime threat detection with image scanning to correlate vulnerabilities with real-world attack behavior
    • Helps enforce compliance with CIS Benchmarks and NIST standards for cloud-native security.

What metrics should we track to measure the effectiveness of our container runtime security?

Track metrics that assess accuracy, but also performance impact and response efficiency when evaluating a new container security program. Here are the key metrics:

  • Threat Detection Accuracy: False positive/negative rates to gauge alert reliability.
  • Incident Response Time: Average time to detect and contain security incidents.
  • Container Performance Impact: CPU, memory, and I/O overhead from security tools.
  • Policy Enforcement Success: Percentage of security policies correctly enforced.
  • Exploit Attempts Blocked: Number of runtime threats detected and mitigated.
  • Drift Detection Rate: Frequency of unauthorized container modifications.
  • Integration Effectiveness: Correlation of runtime security with broader security tools (SIEM, XDR). 

What’s the difference between agent-based and agentless container runtime security?

Agent-based (and sensor-based) security offers real-time protection, while agentless solutions provide broader visibility with less enforcement control. 

Agents and sensors deploy lightweight software components inside nodes and containers for deep visibility into syscalls, processes, and network activity. That enables real-time blocking of threats.

Agentless security relies on API integrations or external monitoring, collecting data remotely. That means agentless monitoring comes with lower granularity and slower identification and response times. For that reason, it’s best for post-incident forensics rather than real-time protection and enforcement.

How does container runtime security differ across major cloud providers (AWS, Azure, GCP)?

All three primary cloud providers offer native container security, but runtime visibility typically requires third-party tools. AWS and Azure integrate tightly with native SIEM solutions, while GCP prioritizes event-driven security. Here’s a breakdown:

  • AWS
    • AWS GuardDuty detects container threats via logs.
    • AWS Inspector provides ECR image scanning.
  • Azure
    • Microsoft Defender for Containers offers runtime threat detection, vulnerability scanning, and Kubernetes policy enforcement.
    • Azure workloads can connect directly with Azure Sentinel, the platform’s native SIEM, for incident correlation.
  • GCP
    • Google Cloud Security Command Center provides event-based threat detection from logs.
    • Cloud IDS and GKE Workload Identity offer network detection, but not process-level monitoring.

What skills does our team need to manage container runtime security effectively?

Teams need a mix of security, container orchestration, and automation skills to expand their work to include these ephemeral workloads. Consider the following skills checklist:

  • Containerization Expertise: Understanding Docker, Kubernetes, and OCI-compliant runtimes (containerd, CRI-O).
  • Runtime Security Knowledge: Familiarity with intrusion detection tools, sandboxing, and policy enforcement.
  • Threat Detection & Incident Response: Analyzing logs, correlating runtime alerts, and responding to breaches.
  • Cloud Security & IAM: Implementing least privilege, runtime isolation, and workload identity management.
  • SIEM & XDR Integration: Integrating or feeding container logs into security tools.
  • Automation & Compliance: Using IaC to enforce security policies at scale.

How should our incident response process change when implementing container runtime security?

Incident response can take on a different form when responding in an ephemeral environment, and that means some processes will need to adapt to the containerized ecosystem. How? Here’s a checklist with the specifics:

  • Shift to Real-Time Detection: Use runtime security tools to detect anomalies before container termination.
  • Automated Containment: Implement policy-based auto-isolation to stop threats without the time lag that comes with manual intervention.
  • Log & Artifact Preservation: Capture forensic snapshots of compromised containers before they disappear and store logs in SIEM/S3/cloud storage.
  • Kubernetes-Specific Playbooks: Expand IR playbooks to include container restart policies, pod disruption responses, and node quarantine strategies.
  • Immutable Recovery Approach: Instead of patching, rebuild compromised images from scratch and redeploy securely.

What are container runtime compliance requirements?

While compliance frameworks differ, container runtime compliance requires containerized applications to meet requirements in three core areas: security, regulatory, and auditing. 

  • Container Security Standards: Follow CIS Benchmarks, NIST 800-190, and PCI DSS to secure runtime configurations.
    • Enforcement: Regardless of the standards, teams must enforce these policies. They can use Kubernetes Admission Controllers, OPA Gatekeeper, and Pod Security Standards (PSS) to enforce compliance at runtime.
    • Vulnerability Scanning at Runtime: They will also need to regularly scan running containers for CVEs and patch vulnerabilities without downtime.
  • Regulatory Compliance: GDPR requires runtime-level data protection controls to prevent unauthorized access, while HIPAA mandates encryption, logging, and access controls for healthcare workloads.
  • Auditing & Monitoring: Runtime security tools must log container activity for forensic investigations and compliance audits.

What container runtime is best for enterprises?

“The best” container runtime depends on security, performance, compatibility, and cost. Enterprises need to evaluate all these factors in the context of their own security stacks and operations. Here’s what to analyze:

  • Security: Sandboxing (gVisor, Kata), syscall filtering (seccomp), vulnerability management.
  • Performance: Startup time, syscall overhead, resource efficiency.
  • Compatibility: Kubernetes CRI compliance, OCI support, security stack integration.
  • Management Overhead: Complexity of updates, patching, operational tuning.
  • Cost Considerations: Licensing, infrastructure usage, and TCO.

Runtime       Security Features                 Performance   Enterprise Support   Best Use Case
CRI-O         Minimal attack surface, seccomp   High          Red Hat, SUSE        Kubernetes-native security
containerd    Modular, needs plugins            High          CNCF-supported       General Kubernetes workloads
gVisor        Syscall sandboxing                Medium        Google Cloud         Multi-tenant environments
Kata          Lightweight VMs                   Medium-Low    Alibaba, OpenStack   Confidential computing

What are container runtime security best practices?

Securing container runtimes requires a defense-in-depth approach that addresses vulnerabilities in multiple layers with continuous monitoring and enforcement. Here is the high-level checklist that will apply to all container security strategies and form the basis of more individualized plans:

  • Harden the Runtime: Use minimalist runtimes (CRI-O, gVisor, Kata) to reduce attack surfaces. Enable seccomp, AppArmor, and SELinux for syscall filtering.
  • Enforce Image Security: Require signed and scanned images with admission control policies (OPA Gatekeeper) to block unverified containers.
  • Mitigate Common Vulnerabilities: Prevent privileged containers, insecure host mounts, and unpatched images. Restrict dangerous syscalls using seccomp profiles.
  • Implement Network Segmentation: Apply Kubernetes NetworkPolicies to limit container-to-container communication and prevent lateral movement.
  • Security Monitoring Framework: Use tools for syscall monitoring, CNAPPs, and SIEM integrations to detect runtime threats in real time.
  • Automate Response & Compliance: Define incident response playbooks for compromised containers, enforce runtime compliance policies, and integrate logs into XDR/SIEM tools.
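The network segmentation item above can be sketched with two Kubernetes NetworkPolicies: deny all ingress by default in a namespace, then allow only the traffic you expect (namespace and label names here are illustrative):

```yaml
# Default-deny: no pod in the namespace accepts ingress traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
---
# Exception: frontend pods may reach api pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```

Starting from default-deny and whitelisting known flows limits lateral movement even if one container in the namespace is compromised.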