
Applications need input to function on any operating system, and containers are no exception: they need instructions to run and perform their intended operations. Given the scale at which containers are deployed within organizations, however, managing them manually is impractical.

This is where container runtimes come in. But while runtimes solve the scale problem, they also manage the interface between the container and the host system, which introduces new security challenges. This blog post examines container runtimes and their function within a container architecture, as well as key considerations for security teams.

What is a Container Runtime? 

A container runtime is the software that unpacks a container image and turns it into a running process on a computer. The runtime interacts with its environment – whether in the cloud, on a bare-metal server, or on a Linux host – and its core function is to pull the container image and translate everything within it into a functioning application.

Container runtimes are thus essential when using containerized workloads. Without a runtime, containers remain static, non-functional images of applications. Runtimes in all their variants ensure containers operate effectively throughout their entire lifecycle. 

Container Runtime Functions and Responsibilities 

Container runtimes are designed to facilitate the execution and management of containers. They have three core responsibilities within the container ecosystem: 

Container Execution

Container runtimes execute containers and manage them throughout their entire lifecycle. This includes monitoring container health and restarting the container if it fails during normal operation. Runtimes also clean up once the container completes its tasks.

A look at the status of running containers in a CNAPP.
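The execution responsibility described above can be sketched as a restart-on-failure loop. This is a toy illustration, not a real runtime: a Python callable stands in for the container's main process, and the restart budget is an assumed policy.

```python
# Toy sketch of a runtime's restart-on-failure behavior. A real runtime
# supervises OS processes; here a callable stands in for the container's
# entrypoint (an illustrative simplification).

def supervise(start_container, max_restarts=3):
    """Run the container's entrypoint, restarting it if it exits non-zero."""
    restarts = 0
    while True:
        exit_code = start_container()
        if exit_code == 0:
            # Clean completion: the runtime would now clean up the container.
            return {"exit_code": 0, "restarts": restarts}
        if restarts >= max_restarts:
            # Give up and report the failure to the caller.
            return {"exit_code": exit_code, "restarts": restarts}
        restarts += 1  # container crashed: restart it

# Simulate a container that crashes twice, then runs to completion.
attempts = {"n": 0}

def flaky_entrypoint():
    attempts["n"] += 1
    return 1 if attempts["n"] <= 2 else 0

result = supervise(flaky_entrypoint)  # restarted twice, then exited cleanly
```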

Interacting with the Host Operating System 

Container runtimes use features of the operating system, like namespaces and cgroups, to isolate and manage resources for the container workloads. The idea is to isolate containers so that the processes inside them cannot disrupt the host or other containers to ensure a secure environment.
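To make the namespace mechanism concrete, here is a sketch of how a low-level runtime might translate the namespace list in an OCI `config.json` into the clone/unshare flags it passes to the kernel. The flag values are the real Linux constants; the config snippet is hypothetical, and a real runtime would of course go on to call the kernel with these flags.

```python
# Map OCI namespace types to the Linux clone(2) flag constants a runtime
# would use to create them. Values are the actual kernel constants.
CLONE_FLAGS = {
    "pid":     0x20000000,  # CLONE_NEWPID  - private process IDs
    "network": 0x40000000,  # CLONE_NEWNET  - private network stack
    "mount":   0x00020000,  # CLONE_NEWNS   - private mount table
    "uts":     0x04000000,  # CLONE_NEWUTS  - private hostname
    "ipc":     0x08000000,  # CLONE_NEWIPC  - private IPC objects
    "user":    0x10000000,  # CLONE_NEWUSER - private UID/GID mappings
}

def flags_for(config):
    """OR together the clone flags for every namespace the config requests."""
    flags = 0
    for ns in config["linux"]["namespaces"]:
        flags |= CLONE_FLAGS[ns["type"]]
    return flags

# Hypothetical fragment of an OCI config.json requesting three namespaces.
config = {"linux": {"namespaces": [
    {"type": "pid"}, {"type": "network"}, {"type": "mount"},
]}}
isolation_flags = flags_for(config)
```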

Security in a CNAPP showing isolation mechanisms in place, where containers are confined and interactions are monitored.

Container Resource Allocation

Container runtimes allocate and regulate CPU, memory, and I/O for each container. By doing this, runtimes prevent any single container from monopolizing resources – a potential major stumbling block in multi-tenant environments. The efficient management of these resources, in coordination with the host OS, is one of the reasons containerization is so widely adopted in modern software development.
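As a sketch of the cgroup side of this, the snippet below translates requested limits into the strings a runtime would write into the cgroup v2 control files. The file names (`cpu.max`, `memory.max`) are the real cgroup v2 interface; the specific limits are illustrative, and a real runtime would write these values under `/sys/fs/cgroup/`.

```python
# Sketch: turning requested limits into cgroup v2 control-file values.
CPU_PERIOD_US = 100_000  # default scheduling period: 100 ms

def cgroup_values(cpus, memory_bytes):
    """Return the strings a runtime would write to cpu.max and memory.max."""
    # e.g. 1.5 CPUs -> the container may run 150000 us out of every 100000 us
    quota = int(cpus * CPU_PERIOD_US)
    return {
        "cpu.max": f"{quota} {CPU_PERIOD_US}",
        "memory.max": str(memory_bytes),
    }

# Limit a container to 1.5 CPUs and 256 MiB of memory.
values = cgroup_values(1.5, 256 * 1024 * 1024)
```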

How Does a Container Runtime Work? 

Back in 2015, the Open Container Initiative (OCI) was launched to define common standards for container formats and container runtimes. Although Docker launched the OCI, it has since been supported by major cloud-native companies and projects, including Amazon Web Services, IBM, Microsoft, Alibaba Cloud, and OpenStack. 

The OCI, also backed by the Linux Foundation, defines standards that runtimes must follow, based on three key specifications: 

1. What the container image includes 

This defines the contents of a container image, including application code, dependencies, libraries, and configurations that make the container functional.

2. How runtimes can retrieve container images

Container runtimes must follow specific protocols to fetch container images from registries or repositories, ensuring consistent platform management.
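The retrieval protocol can be sketched as URL construction against a registry's API: one endpoint for the image manifest and one per layer blob, which is the shape of the OCI Distribution Specification's pull endpoints. The registry and repository names below are hypothetical.

```python
# Sketch of the two pull endpoints defined by the OCI Distribution Spec:
# GET /v2/<name>/manifests/<reference>  and  GET /v2/<name>/blobs/<digest>

def manifest_url(registry, repository, reference):
    """URL for an image manifest, by tag or digest."""
    return f"https://{registry}/v2/{repository}/manifests/{reference}"

def blob_url(registry, repository, digest):
    """URL for a single content-addressed layer or config blob."""
    return f"https://{registry}/v2/{repository}/blobs/{digest}"

# Hypothetical registry and image names for illustration.
m = manifest_url("registry.example.com", "team/web-app", "1.0.0")
b = blob_url("registry.example.com", "team/web-app", "sha256:abc123")
```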

3. How container images are unpacked, layered, mounted, and executed

Runtimes must adhere to standards for how images are structured and layered, ensuring they can be efficiently decompressed, mounted, and run on any OCI-compliant platform. 
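The layering model can be sketched as applying layers in order, with later layers overriding earlier ones and OCI "whiteout" entries (files named `.wh.<name>`) deleting files from lower layers. The model below uses plain dicts in place of real tar layers, so it is a simplification of the actual unpacking.

```python
# Sketch: assembling a root filesystem from ordered image layers.
# Each layer is a dict of path -> contents; ".wh.<name>" is an OCI whiteout.

def apply_layers(layers):
    rootfs = {}
    for layer in layers:
        for path, contents in layer.items():
            directory, _, name = path.rpartition("/")
            if name.startswith(".wh."):
                # Whiteout: delete the shadowed file from lower layers.
                rootfs.pop(f"{directory}/{name[len('.wh.'):]}", None)
            else:
                rootfs[path] = contents  # upper layers override lower ones
    return rootfs

layers = [
    {"/etc/os-release": "base", "/usr/bin/app": "v1"},   # base image
    {"/usr/bin/app": "v2", "/etc/.wh.os-release": ""},   # update app, delete a file
]
rootfs = apply_layers(layers)  # only the updated app survives
```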

Following standards ensures interoperability within the container management ecosystem. 

In terms of process, the runtime follows this basic flow based on OCI standards: 

  1. The runtime is asked to create a container instance, including where the image can be retrieved and a unique identifier. 
  2. The runtime reads and verifies the container image’s configuration.
  3. The runtime issues a start command, launching the main process specified in the container image’s configuration. 
    • The container’s root filesystem is mounted, and namespaces isolate it from the host and other containers. 
    • cgroups enforce resource limits so the container operates within its defined quotas. 
    • The main process launches with its root filesystem set to the mount point created above, so the container sees only its own files and operates within the defined quotas and security policies. 
  4. Once finished, a stop is issued to shut down the container instance. 
  5. The container instance is deleted, which then removes all references to the container and cleans up the file system. 
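The flow above can be sketched as a small state machine over the OCI lifecycle operations (create, start, stop, delete). A real low-level runtime drives kernel primitives at each step; this sketch models only the state transitions, and the container and image names are hypothetical.

```python
# Sketch of the OCI container lifecycle as a state machine.
class Container:
    def __init__(self, container_id, image):
        self.id = container_id
        self.image = image
        self.state = "created"   # image pulled, config verified, rootfs mounted

    def start(self):
        assert self.state == "created", "can only start a created container"
        self.state = "running"   # main process launched in its namespaces

    def stop(self):
        assert self.state == "running", "can only stop a running container"
        self.state = "stopped"   # main process terminated

    def delete(self):
        assert self.state == "stopped", "delete only after the container stops"
        self.state = "deleted"   # rootfs unmounted, all references cleaned up

# Walk one container through the full lifecycle.
c = Container("web-1", "registry.example.com/team/web-app:1.0.0")
c.start()
c.stop()
c.delete()
```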

Most of the time, users have no direct interaction with this process. Instead, they will use an orchestration platform like Kubernetes or a higher-level container engine, which handles container scheduling, scaling, and monitoring. Regardless, the above process is how runtimes interact with containers, even if not directly exposed to users. 

What are the Types of Container Runtimes?

There are three general types of container runtimes, distinguished primarily by how close they sit to the individual containers themselves. They are: 

Low-level Container Runtimes 

Low-level container runtimes are the closest to the Linux kernel. They’re responsible for launching the containerized process and have the most direct interaction with containers. Low-level runtimes are also the ones that implement the OCI runtime specification, since their focus is container lifecycle management. These are the basic building blocks that make containers possible, doing the actual unpacking, creating, starting, and stopping of container instances.

Common low-level container runtimes include runc, the OCI reference implementation, and crun, a lighter-weight alternative written in C. 

High-level Container Runtimes 

High-level container runtimes, as a general rule, offer greater abstraction than their low-level counterparts. They’re responsible for transporting and managing container images, unpacking them, and handing off to a low-level runtime to run the container. These are the runtimes that users interact with directly. Examples include containerd and CRI-O. 

Sandboxed or Virtualized Runtimes

The OCI standards also offer guidance for sandboxed and virtualized runtimes, which trade some performance for stronger isolation. 

A Word on Sandboxed Runtimes

Sandboxed container runtimes, such as gVisor, nabla-containers, and Kata Containers, provide stronger isolation than standard runtimes by running containers in a virtualized environment. For example, gVisor intercepts syscalls and runs them in a user-space kernel, so there’s less risk of container escapes. Similarly, Kata Containers uses lightweight VMs to isolate workloads from the host. 

But sandboxing isn’t a complete or ideal solution. Compared to standard runtimes, sandboxed environments incur additional overhead due to issues such as syscall emulation (gVisor) or VM boot times (Kata). Benchmarks show that gVisor has higher latency for system calls and increased CPU overhead, while Kata Containers take longer to start up.

High-security environments, such as financial services, government agencies, and multi-tenant cloud providers, have adopted sandboxed runtimes, particularly for their high-security workloads. For example, Google Cloud Run uses gVisor for enhanced security and could be a promising approach for workloads where security is paramount.

Container Runtime Interface (CRI) Explained

The CRI is a Kubernetes-specific API layer that enables the orchestrator to interact with various container runtimes. The CRI makes Kubernetes runtime-agnostic, instead of relying on a single particular runtime, such as Docker Engine. 

Without CRI, teams wouldn’t be able to choose the best runtime for their workloads based on more specific, unique factors, such as security and performance needs. And in the future, when innovative new runtimes emerge, Kubernetes will be able to incorporate them without re-architecting.

Because of the CRI, Kubernetes can work with any CRI-compliant runtime – which, in turn, can drive any OCI-compliant low-level runtime.
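The value of this indirection can be sketched in a few lines: the orchestrator codes against an interface, and any runtime implementing it is interchangeable. The real CRI is a gRPC API (its RuntimeService and ImageService); the Python class below is a deliberately simplified stand-in with method names pared down from the real ones, and the fake runtime is hypothetical.

```python
# Sketch of why the CRI keeps Kubernetes runtime-agnostic: the orchestrator
# only depends on the interface, never on a concrete runtime.

class RuntimeService:
    """Minimal, simplified slice of a CRI-style runtime interface."""
    def run_pod_sandbox(self, pod_name):
        raise NotImplementedError
    def create_container(self, sandbox_id, image):
        raise NotImplementedError

class FakeContainerd(RuntimeService):
    """Hypothetical stand-in for a CRI-compliant runtime."""
    def run_pod_sandbox(self, pod_name):
        return f"sandbox-{pod_name}"
    def create_container(self, sandbox_id, image):
        return f"{sandbox_id}/{image}"

def schedule_pod(runtime: RuntimeService, pod_name, image):
    # The orchestrator never cares which concrete runtime it is talking to.
    sandbox = runtime.run_pod_sandbox(pod_name)
    return runtime.create_container(sandbox, image)

container_ref = schedule_pod(FakeContainerd(), "web", "nginx:1.25")
```

Swapping in a different `RuntimeService` implementation changes nothing in `schedule_pod`, which is the property the CRI gives Kubernetes.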

What else do you need to know about this small but mighty component of the Kubernetes ecosystem? Above all, it exists so teams can make informed decisions about where and how their workloads run.

Common Container Runtime Tools: Engines vs. Orchestrators 

Within the container management universe, there are three classes of tools and processes used to operate containers no matter where they are hosted. 

It’s important to note that container engines can sometimes operate like runtimes and can be used from within other tools, like orchestrators. The difference among these is a matter of scale: runtimes act directly on containers, engines offer interfaces that hand single containers off to runtimes, and orchestrators manage multiple sets of containers across multiple environments. 

What CISOs Need to Understand About Container Runtimes

As containers become more common in more contexts, security teams and CISOs need to pay closer attention to keeping the runtime phase protected against attack and unintentional error. Much like with traditional applications, it’s easy for developers and DevOps teams to introduce mistakes or misconfigurations within containers that are not found until they try to unpack and run the image. 

As a result, CISOs need to ensure that mandatory scanning for vulnerabilities is included in container deployment processes, such as static scanning prior to deployment, sandboxing for dynamic testing, and behavior monitoring for runtime environments. Ensuring that this testing occurs can prevent the addition of unintentional errors and protect containers from known vulnerabilities. 
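The static-scanning gate in that pipeline can be sketched as a simple admission check: block an image when its scan report contains findings above a severity threshold. The severity policy, finding IDs, and scan results below are all hypothetical; a real pipeline would call an actual scanner and enforce the decision in CI/CD or at the cluster's admission layer.

```python
# Toy sketch of a pre-deployment gate: refuse images whose static scan
# reports findings at or above a blocking severity. Data is hypothetical.

BLOCKING_SEVERITIES = {"critical", "high"}

def admit(image, scan_findings):
    """Return (allowed, blocking_finding_ids) for a scanned image."""
    reasons = [f["id"] for f in scan_findings
               if f["severity"] in BLOCKING_SEVERITIES]
    return (len(reasons) == 0, reasons)

# Hypothetical scan report for an image about to be deployed.
findings = [
    {"id": "CVE-2024-0001", "severity": "critical"},
    {"id": "CVE-2024-0002", "severity": "low"},
]
allowed, blocked_for = admit("team/web-app:1.0.0", findings)  # blocked
```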

Runtime is an especially critical phase for testing, regardless of whether it involves containers, APIs, or traditional applications. After all, static code or image scanning can only surface a limited number of issues or known vulnerabilities. Dynamic testing is thus vital to catch the rest. 

A few of the common best practices CISOs and security teams can implement include least-privilege container configurations, image signing, and continuous runtime monitoring. 

Container runtime is a potentially risky phase of container deployment, especially if the underlying architecture is not monitored or kept secure. CISOs need to ensure that their teams implement strong controls and testing for containers before and during runtime so that containers remain protected against potential compromise. 

Upwind Secures Container Runtimes

Cloud-native deployments are becoming more common and, therefore, more vital to security. As the security implications of container runtimes are also better understood and accounted for, vendors who operate within the space will likely add more security features to ensure that containerized workloads are more protected. In addition, developers, operations professionals, DevOps teams, and security professionals alike need to ensure that, in the interim, they can protect their containers against threats. 

That’s why cloud-native application protection platforms like Upwind are so vital. Adding container runtime security to track behavior, mitigate risk, and monitor workloads means that developers and DevOps teams can protect their applications and the underlying host more readily. A comprehensive CNAPP also gives CISOs and security teams greater visibility for improved detection and response, along with alerts for potential incidents. 

To learn more about Upwind’s container runtime protection solution and get advice on best practices, schedule a demo.

FAQ

What level of performance impact should we expect when implementing container runtime security?

The performance impact of implementing container runtime security varies from negligible to moderate, depending on the security measures implemented and individual workload demands. Some studies estimate the effect at a few percentage points, but the most reliable estimate comes from benchmarking your own representative workloads with the security controls enabled and disabled.

How does container runtime security integrate with our existing security tools?

Container runtime security integrates with existing tools by feeding runtime data into SIEM and XDR platforms, vulnerability management tools, and cloud security solutions like CNAPPs.

What metrics should we track to measure the effectiveness of our container runtime security?

Track metrics that assess accuracy, as well as performance impact and response efficiency, when evaluating a new container security program.

What’s the difference between agent-based and agentless container runtime security?

Agent-based (and sensor-based) security offers real-time protection, while agentless solutions provide broader visibility with less enforcement control. 

Agents and sensors deploy lightweight software components inside nodes and containers for deep visibility into syscalls, processes, and network activity. That enables real-time blocking of threats.

Agentless security relies on API integrations or external monitoring, collecting data remotely. That means agentless monitoring comes with lower granularity and slower identification and response times, so it’s best suited to post-incident forensics rather than real-time protection and enforcement.

How does container runtime security differ across major cloud providers (AWS, Azure, GCP)?

All three primary cloud providers offer native container security, but runtime visibility typically requires third-party tools. AWS and Azure integrate tightly with native SIEM solutions, while GCP prioritizes event-driven security.

What skills does our team need to manage container runtime security effectively?

Teams need a mix of security, container orchestration, and automation skills to extend their work to these ephemeral workloads.

How should our incident response process change when implementing container runtime security?

Incident response takes on a different form in an ephemeral environment, so processes such as evidence capture and containment need to adapt to the containerized ecosystem.

What are container runtime compliance requirements?

While compliance frameworks differ, container runtime compliance requires containerized applications to meet requirements in three core areas: security, regulatory, and auditing. 

What container runtime is best for enterprises?

“The best” container runtime depends on security, performance, compatibility, and cost. Enterprises need to evaluate all these factors in the context of their own security stacks and operations. Here’s what to analyze:

| Runtime | Security Features | Performance | Enterprise Support | Best Use Case |
| --- | --- | --- | --- | --- |
| CRI-O | Minimal attack surface, seccomp | High | Red Hat, SUSE | Kubernetes-native security |
| containerd | Modular, needs plugins | High | CNCF-supported | General Kubernetes workloads |
| gVisor | Syscall sandboxing | Medium | Google Cloud | Multi-tenant environments |
| Kata | Lightweight VMs | Medium-Low | Alibaba, OpenStack | Confidential computing |

What are container runtime security best practices?

Securing container runtimes requires a defense-in-depth approach: address vulnerabilities at multiple layers, maintain continuous monitoring and enforcement, and build more individualized plans on that foundation.