
Demystifying Cyber Risks: Containers

Wes Ladd

November 28, 2022 - 11 min read

Key Containers Concepts

Containers are not the "next big thing"; they are already here, and businesses across industries are rapidly increasing container adoption.  

  • Containers are a packaging mechanism that includes everything needed to run an application on a host system.
  • While there are some similarities, containers are separate and distinct from Virtual Machines (VMs).  VMs are abstracted at the hardware layer, while containers are abstracted at the operating system (OS) level. 
  • At their simplest, containers run application components independently while leveraging a single shared instance of an operating system kernel.

Why Security Auditors Should Care

  • Containers offer significant risk reduction benefits thanks to the potential for a reduced attack surface through removal of unneeded functionality.
  • By design, containers are “immutable”; when a change is made, a new container image must be built and deployed, reducing the risk of configuration error.
  • Containers are inspectable. Automated tools can be leveraged to validate integrity against established baselines.
  • Containers inherit the management challenges of breaking monolithic applications down into independent application components, known as microservices.
  • While the attack surface is reduced, if a vulnerability is present, successful exploitation may impact all containers sharing the operating system instance. Escaping a container's isolation this way is commonly called a “container breakout”.

Container Assurance Considerations

Through this post and the associated mini-course, cyber risk assessors will learn about:

  • Why attackers who gain access to a container may be able to access other containers hosted on the same system.
  • Why organizations should be careful to separate workloads of different sensitivities so they don't share the same host system.
  • How a container's lifecycle has changed patch and vulnerability management practices.
  • How to scan a container image for security vulnerabilities and what the results may tell us.


Industry research estimates that over half of businesses in the financial services, healthcare, telecommunications, and retail sectors already use containers in their production systems. Furthermore, a 2019 study by Tripwire indicates that 60% of surveyed security professionals with containers in their environment reported security incidents associated with container usage.

IT teams are at a crossroads. They are rapidly adopting containerized architectures but may not have the expertise to secure these new environments effectively, resulting in increased risk to their organizations.

As security professionals, we must understand container technologies to clearly articulate risks being introduced to the organization. This allows us to make informed recommendations regarding enhancements to technical and process-based controls in containerized environments.

So what are containers? How do they compare to similar technologies, such as Virtual Machines (VMs)? And what are the most significant risks we need to be aware of when reviewing their implementation and operation?

We will answer these questions in the following post. If you would like to learn more, sign up for the Train GRC Academy and enroll in the free mini-course, “Demystifying Cyber Risks: Containers”.


What is a Container?

Containers are a packaging mechanism that includes everything needed to run an application: source code, runtime, system tools, system libraries, and settings. This packaging mechanism allows container-based applications to be deployed quickly and consistently, regardless of the target environment.

So far, this sounds a lot like a VM, right? Well yes; to better describe what containers are, let's compare and contrast them with VMs.

Containers vs. Virtual Machines

Although there are some similarities in use cases, containers are very different from VMs.

Containers and VMs have similar resource isolation benefits, but they are functionally different because containers virtualize the operating system instead of the hardware.

VMs are abstracted at the physical hardware layer. Each VM includes a full copy of an operating system, the application, necessary binaries, and libraries. VMs run on a hypervisor, which enables sharing of a host computer's physical resources between multiple virtual machines, each running its own copy of the operating system.

In contrast, containers are abstracted at the operating system layer. Each container bundles application code and dependencies together, sharing the host kernel across multiple containers.

Container build files are built into container images and executed by a container runtime on the host operating system. Since containers share a host kernel, they don't need to boot an entire operating system. This enables containers to be more efficient and lightweight than most virtual machine technologies.

The figure below provides a straightforward depiction of the difference between containers and VMs.

Virtual Machine vs. Containerized Deployment Infrastructure

Container Components

Now that we have a basic understanding of the distinction between containers and VMs, let's take a deeper dive into some of the underlying technologies that power containers.

Container Configuration Files

Container instructions are a text-based, human-writable, and machine-readable set of commands for building a container image.

Container instructions are commonly written in a file known as a Dockerfile and adhere to a standard format and instruction set. Each instruction adds a layer to the image. For instance:

  • FROM ubuntu:18.04
  • COPY . /app
  • RUN make /app
  • CMD python3 /app/main.py

This Dockerfile can be used to create a container image. Each of the instructions in the Dockerfile adds a layer to the container image and, when you run the command:

  • docker build -t python-app:v1 .

with a Dockerfile in your current directory containing those instructions, the Docker container runtime will:

  • Create a layer from the ubuntu:18.04 container image
  • Copy files from the host to the /app directory in the second layer of the container image
  • Build the application using a software utility called make in the third layer
  • Specify a Python 3 program to run on container image launch in the final layer
  • Tag (-t) your newly built container image with the name “python-app” and tag “v1”

A Dockerfile can contain multiple "FROM" instructions. This is known as a "multi-stage build". Multi-stage builds allow us to create images that are derived from multiple base images. The primary benefit of multi-stage builds is that they reduce the final image size and, potentially, the attack surface presented by production containers.
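As a sketch extending the example Dockerfile above (the stage name "builder" and the copied paths are illustrative assumptions), a multi-stage build might look like:

```dockerfile
# First stage: build the application with the full toolchain available.
FROM ubuntu:18.04 AS builder
COPY . /app
RUN make /app

# Second stage: start from a fresh base and copy in only the built
# artifacts, so build tools and intermediate files never reach the
# final image.
FROM ubuntu:18.04
COPY --from=builder /app /app
CMD python3 /app/main.py
```

Only the final stage is shipped; the "builder" stage and everything installed in it are discarded after the build.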

Container Image

Container images are templates built from the set of instructions written in a Dockerfile or other configuration file format.

Container images define what you want your packaged application and its dependencies to look like and what software processes to run when it's launched.

Container images consist of at least one base image layer and may consist of multiple additional layers. Layers represent a portion of the image's file system and are key to containers' lightweight yet robust structure.

A recent advancement with container images is the use of "distroless images" as the final layer of a multi-stage build process. Open-sourced by Google, distroless images strip away most features of a standard Linux distribution. This significantly reduces the container's attack surface, as unneeded features such as interactive shells, package managers, and other applications you would expect to find in a standard Linux operating system are no longer included. Essentially, only your application and its runtime dependencies remain in the container image.
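As a sketch, the final stage of a multi-stage build could swap the standard base for a distroless one. The image name and paths here are illustrative assumptions (Google publishes its distroless images under gcr.io/distroless):

```dockerfile
# Final stage only: everything except the application and its runtime
# dependencies is absent -- no shell, no package manager.
FROM gcr.io/distroless/python3
COPY --from=builder /app/main.py /app/main.py
# The distroless Python images use the Python interpreter as their
# entrypoint, so CMD supplies only the script to run.
CMD ["/app/main.py"]
```

An attacker who compromises such a container has no shell to drop into and no package manager with which to pull down tooling.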

Container Instances

Container instances are deployed instances of a particular container image. When a container image is launched, the container runtime adds a writeable layer to the image, known as the container layer. This layer stores all the changes to the container throughout its runtime. This process allows the underlying container image to be shared across multiple running containers while maintaining their own individual states.

Container Image Layers
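A rough analogy in Python may help here. This is a conceptual sketch, not how Docker is actually implemented: Python's `ChainMap` resolves lookups through a stack of dictionaries much like an overlay filesystem resolves reads through image layers, and writes land only in the topmost, writable "container layer".

```python
from collections import ChainMap

# Read-only image layers (conceptually), bottom to top:
base_layer = {"/etc/os-release": "ubuntu 18.04", "/bin/sh": "shell binary"}
app_layer = {"/app/main.py": "print('hi')"}

# The writable container layer added when the container starts:
container_layer = {}

# Lookups fall through the layers; writes go to the first (top) map only.
fs = ChainMap(container_layer, app_layer, base_layer)
fs["/tmp/scratch"] = "runtime data"

print(fs["/etc/os-release"])  # reads fall through to the base layer
print(container_layer)        # only runtime changes live here
```

Because the lower layers are never modified, the same image layers can safely back many running containers at once, each with its own top layer.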

Container Volumes

Data storage volumes allow a container to access persistent file storage on the host file system. Data volumes exist as regular directories and files on the host file system. When a container is destroyed, updated, or rebuilt, the data volumes won’t be impacted. This is a key capability to support applications built on ephemeral technologies such as containers, while still maintaining persistent data storage.
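As a conceptual sketch in Python (a plain directory stands in for a Docker volume, and a dict stands in for the container's writable layer): destroying the container discards its layer, while the volume directory on the host survives.

```python
import os
import shutil
import tempfile

# A "volume": just a regular directory on the host file system.
volume = tempfile.mkdtemp()
with open(os.path.join(volume, "orders.db"), "w") as f:
    f.write("persistent data")

# The container's writable layer, which lives only as long as the container.
container_layer = {"/tmp/cache": "scratch data"}

# Destroy the container: its writable layer is discarded...
container_layer.clear()

# ...but the volume's contents persist for the next container to mount.
surviving_files = os.listdir(volume)
print(surviving_files)

shutil.rmtree(volume)  # clean up the sketch's temporary directory
```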


Container Management Technologies

If you are involved in container environments, you have likely heard about certain popular technologies (Kubernetes, anyone?) used to manage them. Container management technologies include container runtimes, orchestration tools, container registries, and container service meshes.

Container Runtimes

Container engines, or container “runtimes”, are utilities that run containers on a host operating system. They are responsible for loading container images from a repository, monitoring local system resources, isolating system resources for a container, and managing the container lifecycle.

Container runtimes are commonly split into two categories:

Low-level Container Runtimes
  • These utilities are responsible for the kernel-level interaction and configuration. They are responsible for operations such as setting up namespaces and cgroups for a container to execute.
  • Low-Level Container Runtimes include runC and crun.
High-Level Container Runtimes
  • These utilities are responsible for providing image management and for allowing developers to build and execute their container images.
  • High-Level Container Runtimes include containerd, Docker, CRI-O, and Windows Containers.
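To make "setting up namespaces" concrete: every Linux process already carries a set of namespace handles under /proc, and a low-level runtime creates fresh ones for each container so its processes see their own PIDs, network stack, and mounts. This snippet (Linux-only) simply lists the namespaces of the current process:

```python
import os

# Each entry is a kernel namespace (pid, net, mnt, ...) that this
# process belongs to; a low-level runtime such as runC gives a new
# container fresh namespaces of these types before starting it.
for name in sorted(os.listdir("/proc/self/ns")):
    print(name, "->", os.readlink(f"/proc/self/ns/{name}"))
```

Two processes in the same container share these namespace identifiers; a process in a different container does not.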

Container Orchestration Tools

Container orchestration technologies manage containers using automation. Orchestrators are responsible for managing clusters of containers and taking care of the deployment, management, scaling, and networking of containers and the underlying hosts.

One of the most popular orchestration tools is Kubernetes. It is pervasively deployed across a variety of industries and organizations. You may also encounter OpenShift, Red Hat's container platform built on Kubernetes.

Container Registries

Container registries are catalogs of storage locations, known as repositories, where you can push and pull container images.

One of the most popular public container registries is Docker Hub. Major cloud providers also maintain container registries for their customers' use.

Container Service Mesh

Service meshes can be used to manage network traffic between containerized services. Service meshes are commonly layered on top of orchestration infrastructure, restricting unnecessary network communication channels (ports and services) between unrelated services.

Pros and Cons of Using Containers

Containerization is only one option when deciding how to deploy an application. We need to weigh the pros and cons of using containers against other deployment options such as base operating system deployment or virtual machines.

Below are some of the security-focused pros and cons that should be considered when deciding if containers are suitable for your applications and organization.



Portability

The standardized format for packaging all the components necessary to run an application within a container allows for portability between operating systems and cloud environments.

Immutable and Ephemeral Design Patterns

Containerization promotes an immutable and ephemeral architecture.

Immutable - unchanging over time or unable to be changed. This means that a built container image will perform the same, time and time again, when run on a container runtime. The same files, programs, and configuration settings will be maintained for the life of that container image.

Ephemeral - lasting for a very short time. This means that rather than modifying container images that have been deployed, it is more practical to build a new (modified) image with the relevant updates, deploy the new container image, and discard the original. Unlike servers or virtual machines (which are both generally considered long-living infrastructure), containers are not designed to be easily modified once deployed; instead, they are designed to be easily refreshed with a new image.

Once deployed, there should be little to no need to modify the configuration of a deployed container; as we mentioned, since it is immutable, it will perform consistently. This increases confidence in the state of the deployed service. When a change needs to be made to deployed containers, the ephemeral design pattern allows them to be quickly destroyed and replaced with an updated image. These patterns have significantly changed organizational controls for vulnerability and patch management.

Specifically, organizations have come to rely more heavily on refreshing images to capture automatic updates to base container images. This reduces the likelihood that vulnerabilities persist in an environment, as images may only live for days or hours. However, it can also make it challenging to understand what images are deployed and what vulnerabilities are present over a period of time in your containerized environment.

Potential for Attack Surface Reduction

Unlike traditional "monolithic" application deployments, where many services are running on one server or virtual machine, containerization promotes the use of one service per container. This provides developers and systems administrators with a clearer understanding of the purpose(s) of a specific container image.

This more precise understanding of purpose allows the container to be configured with a more limited number of services and ports exposed to other systems. It also creates an opportunity to remove unnecessary functionality from the final container image. Unnecessary software should not make its way into production container images.

Additionally, when using containers in complex configurations associated with container “orchestration” platforms such as Kubernetes or OpenShift, the use of a container networking technology known as a “service mesh” provides another opportunity for greater control over which services can communicate with each other and which are publicly exposed.


Inspectability

Container images are defined and configured based on configuration files (such as a Dockerfile). These files provide a manifest of the software and configuration of a deployed container. Each container image is saved as a compressed (.tar) file on the host where it is built.

Because a container image is a standardized .tar file, automated tools can easily inspect manifests and audit for security vulnerabilities. Automated open source tools such as Clair, Trivy, and grype can assess the security of a container image.


Lightweight

Unlike VMs, containers share the host OS kernel, eliminating the need for a complete OS instance per application and making containers more lightweight than most comparable VMs or "bare metal" hosts.


Weak Host Isolation

Unlike VMs, containers share the underlying host kernel. This reduces the isolation between containers, the host, and other containers running on the same system.

In the worst-case scenario, a vulnerability in a particular container image or host OS may adversely impact all the containers deployed on that system. This is a key concern in environments subject to scoping considerations, such as the PCI DSS cardholder data environment (CDE). If workloads of mixed data sensitivity are jointly hosted on a single host system, compromise of one container may represent compromise of multiple workloads.

In fact, don't take my word for it; here is container security expert Rory McCune on Hacker News:

@hasheddan tweet - @raesene Container Security Hacker News Post

Management Complexity

When breaking a monolithic application down into containerized microservices, deployment complexity increases. Each deployed container image must be managed, scaled, and integrated with other microservices effectively. This increase in complexity raises the likelihood of mistakes in security-critical configurations.

Runtime Monitoring Challenges

Compared to servers and virtual machines, effective runtime monitoring can be more complex for an organization to implement with existing toolsets. Many of the properties mentioned above (such as being ephemeral and lightweight) make traditional approaches to monitoring difficult within containers.

A container’s deployment lifecycle may only last for minutes or seconds as it is deployed only long enough to perform its required operation and be destroyed. In this sort of timeframe, monitoring systems that use polling mechanisms for observability may not poll frequently enough to even detect if a particular container existed.

Monitoring tools that rely on underlying operating system services, such as SSH (port 22) or SMB (port 445), may have challenges communicating with containers, as these services are rarely exposed.

Quickly Evolving Technology

Container technologies are rapidly changing. IT teams are tasked with remaining current in their understanding of the changes, even as complexity grows. This can result in steep learning curves, a lack of up-to-date reference material, and a shortage of qualified professionals.

While both pros and cons exist for the adoption of containers, the industry has generally decided that the pros outweigh the cons and is powering ahead with the implementation of containerized environments.

Container Security Threats

As with any new technology introduced into an organization's IT infrastructure, relevant cybersecurity threats must be identified and analyzed, so that relevant controls can be implemented and validated.

Here are three common threats that may be introduced into an organization through the implementation of containerized systems.

Container Escape/Breakout

Many security vulnerabilities related to the implementation of containers result in what is commonly known as a "container escape" threat. A container escape, or breakout, occurs when an attacker escapes the container's isolation and gains access to resources of the host operating system. Once such a breakout occurs, all other workloads hosted on that system are, in most cases, also compromised.

Container Breakout Diagram

Common causes of container breakout include:

  • Host OS kernel vulnerability - at least 18 of these have become public since 2016
  • Containers running in "Privileged" mode
  • Exposed Container Runtime (Docker) Socket
  • Exposed host OS process
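One of these causes is straightforward to check from inside a running container: if the Docker socket has been mounted in, any process in that container can drive the host's container runtime. A minimal audit sketch (the socket path is the conventional default):

```python
import os

# The conventional path of the Docker daemon's UNIX socket. If this is
# visible inside a container, workloads there can control the host's
# container runtime -- a classic breakout vector.
DOCKER_SOCKET = "/var/run/docker.sock"

message = (
    "WARNING: Docker socket is exposed"
    if os.path.exists(DOCKER_SOCKET)
    else "Docker socket not present"
)
print(message)
```

The same check can be folded into container entrypoint scripts or admission-time policy to flag risky mounts before they reach production.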

As mentioned previously, a significant security concern with using containers is that they have less isolation from the host OS compared to Virtual Machines. If a vulnerability exists in the host kernel, it could impact all containers running on that host.

For example, the Dirty Pipe vulnerability disclosed in March 2022 allowed an attacker to modify read-only files on the Linux filesystem. In specific configurations, this vulnerability enabled an unprivileged user within a container to alter the underlying host system, resulting in the compromise of all other containers running on the same host.

Untrusted Container Images

Containers are generally built using a base image imported from some third-party repository, such as Docker Hub. However, like any piece of code, images and their dependencies can and do contain vulnerabilities.

Attackers are also known to create their own malware-infested container images and upload them to public repositories for unsuspecting victims to use. In one example from August 2020, a malicious container image labeled "Alpine2" (typosquatting on the popular "Alpine" image) was found in Docker Hub. This malicious image installed and executed a cryptocurrency miner within any container build that included it.

Aside from purposefully malicious images, untrusted images may be poorly configured or implement unnecessary services. This results in large bloated images, increasing their resource utilization, slowing down deployments, and increasing the container's attack surface.

Stale Container Images in Production

While software development processes associated with containers encourage the frequent replacement of stale container images, some container images may end up living for a long time in your environment. Such container images may stretch the definition of "ephemeral".

Organizations that rely on frequent replacement of container images to address patch management requirements may find that a small percentage of container images exist in their environment for 30, 60, or 90 days and beyond without update. This lack of image refresh increases the risk that containers include unpatched vulnerabilities. If attackers identify vulnerabilities in a container image, they may be able to find or craft an exploit to gain initial access and compromise your organization's IT systems.


Conclusion

Containers are a critical part of today's IT infrastructure, and their adoption is rapidly increasing. Whether using Kubernetes, OpenShift, or a managed cloud service, containers are deployed across a wide base of organizations in a diverse set of circumstances.

As a security auditor, understanding a few fundamental container concepts and approaches can go a long way towards understanding the tradeoffs between what containers provide an organization and what risks they may also introduce. If you want to gain hands-on experience building container images and running vulnerability scans against containers, try out our free mini-course on Demystifying Cyber Risks: Containers.

