
Virtual Containers: 8 Basic Truths

  • Containers -- and specifically the Docker form of container technology -- have taken the industry by storm. The quick rise of Docker since it was first launched as an open source project in March 2013 has been particularly astounding. Developers and industry giants like Google are absolutely smitten with what is sometimes referred to as a lightweight version of virtualization.

    Based on all the hype, containers -- and especially what Docker does with them -- sound like a magic elixir for businesses and their cloud ambitions.

    Indeed, the potential for Docker is huge and highly disruptive, Charlie Dai, principal analyst at Forrester Research, wrote in a blog post earlier this year. The firm predicts that Docker-based solutions will disrupt the server virtualization market and drive cloud adoption because of their technology advantages, their agility and speed in responding to business requirements, and their growing ecosystem.

    A white paper produced by Cisco and Red Hat is equally effusive about the potential of containerization: "Linux containers and Docker are poised to radically change the way applications are built, shipped, deployed, and instantiated."

    Network Computing talked with a couple of virtualization experts to get their take on the container phenomenon and what infrastructure pros need to know about this hot technology.

  • What is a virtual container?

    If you're confused about containers, you're not alone. Part of the problem is that people tend to erroneously equate containers with Docker, said Scott Lowe, an engineering architect and writer focused on virtualization, networking, and cloud computing. Linux containers "basically leverage features within the existing Linux kernel to provide a way to isolate processes, usually without as much overhead as full-machine virtualization such as that provided by KVM," he said.

    A Linux container uses Linux kernel features to boot up an instance of an operating system, which may or may not share the underlying Linux kernel and certain libraries from the host system, said Lowe, who works in VMware's NSX group. "Depending on how much is shared or not shared with the host will determine how much overhead that container uses."
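    The kernel isolation features Lowe describes can be exercised directly with the util-linux `unshare` tool, without Docker at all. A minimal sketch, assuming a Linux host with util-linux installed and root privileges:

    ```shell
    # Launch a shell in new PID and mount namespaces (requires root).
    # Inside, `ps` sees only the processes of this namespace -- the same
    # kernel primitive that Docker and LXC build on.
    sudo unshare --pid --mount --fork --mount-proc /bin/sh -c 'ps ax'
    # Typically lists only the sh and ps processes, not the host's.
    ```

    Namespaces isolate what a process can see; control groups (cgroups), the other half of the container picture, limit what it can consume.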

    Containerization is also called operating-system-level virtualization; hypervisors, by contrast, virtualize at the hardware level.

  • What is Docker?

    Docker is built on the same underlying kernel mechanisms as Linux containers, but rather than being used to run an entire operating instance, it provides a way to isolate a single process in a container, Lowe said. By providing a streamlined interface and a central public repository of images (the Docker Hub), Docker makes it very easy for someone to get a Docker container up and running in a matter of minutes.
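    The "up and running in a matter of minutes" claim is straightforward to see in practice. A minimal sketch, assuming Docker is installed and its daemon is running:

    ```shell
    # Pull the small official hello-world image from the Docker Hub and
    # run it; the container starts, prints a greeting, and exits.
    docker run hello-world

    # Run a single process (here: ls) inside an Ubuntu container.
    # The image is fetched from the Docker Hub on first use.
    docker run ubuntu ls /

    # List containers, including ones that have already exited.
    docker ps -a
    ```

    Note how each `docker run` isolates one process, per Lowe's description, rather than booting a full operating system instance.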

    James Bottomley, CTO of virtualization at Parallels, described Docker as a "packaging and orchestration system" that allows an IT pro to run and manipulate applications in ways that are difficult with traditional virtualization.

  • Containerization is nothing new

    While Docker is a newer phenomenon, containers have actually been around for some time. FreeBSD Jails were a precursor to containers that enabled containerization tied to a portion of a file system, Bottomley said. Containerization technology has advanced since then, with developments such as the release of Solaris containerization in 2004, the growth of LXC (Linux Containers) over the past couple of years, and companies such as Parallels working on open source projects like OpenVZ.

    Containerization "looks very new to the enterprise, but to people who have been in the service provider space, it's an old, proven technology," Bottomley said.

  • Key container benefits

    Clearly, a Linux container with its reduced overhead offers the benefits of speed and agility. Spinning up a new container doesn't require booting a full operating system. Containers are faster to boot than virtual machines, denser because of the resource sharing, and much more elastic, Bottomley said. The added density -- which enables running many more containers than virtual machines on one server -- has been a critical feature for hosting providers in managing their razor-thin margins, he said.

    "We're hoping that the granular virtualization of containers will offer many other ways containers can be used," Bottomley said. "We're just on the edge of the container revolution."

  • What containers can't do

    Unlike a virtual machine, a container can't run a different operating system than its host. For example, a hypervisor can run Linux and Windows VMs side by side on the same server. In contrast, a Linux container is limited to running Linux and a Windows-based container is limited to an instance of Windows. This is an important distinction to keep in mind when considering use cases for containers.

  • Docker benefits

    Docker Inc., the commercial entity that manages the Docker open source project and the ecosystem around it, boasts on its website that both developers and sysadmins like Docker: "Docker enables apps to be quickly assembled from components and eliminates friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud."

    The streamlined way Docker works makes it really easy for users to modify, consume, and share images, Lowe said. For example, if a user modifies a base Ubuntu image, the new image only contains the changes that make it different. That means less content to distribute in order to share an image with others. Bottomley said Docker is a good technology for an enterprise that wants to throw an application into a testing environment before deploying it in the cloud.
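    Lowe's point about an image carrying only its changes can be observed with `docker commit` and `docker history`. A sketch, assuming Docker is available (the container name `tmp` and image tag `mypatched` are illustrative, not anything from the article):

    ```shell
    # Start from the base Ubuntu image and make a change inside a container.
    docker run --name tmp ubuntu sh -c 'echo hello > /greeting.txt'

    # Commit the stopped container as a new image; only the changed files
    # form the new top layer -- the Ubuntu base layers are shared, not copied.
    docker commit tmp mypatched

    # Inspect the layers: the top entry is tiny, while the base layers
    # beneath it are unchanged and reused by every image built on them.
    docker history mypatched
    ```

    This layering is what keeps distribution cheap: sharing the modified image with others only requires shipping the small delta layer.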

  • Docker challenges

    Monitoring, logging, networking, service discovery, and managing Docker volumes are challenges that come with using Docker, Lowe said. People are working on many of these issues, but they remain challenges that enterprises should expect, he said. And because Docker enables single-process containers, it's best suited for new applications rather than as "magic fairy dust" over existing enterprise applications. "It's extremely unlikely that you could take a traditional enterprise application -- even if it's designed to run on Linux -- and decompose it into a set of Docker containers," Lowe said.

  • Container security

    Weak security is a common criticism of container technology in general, but one that Bottomley disputes. Many of the security issues stem from the way a container is set up rather than from the technology itself, he said. For example, security problems can crop up when the base kernel and the container tools aren't correctly matched. "Containers being a granular technology means if you don't set the container up right, you can get huge leaks in the container itself," he said.
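    Bottomley's "set the container up right" advice maps to concrete launch options in Docker. A sketch of a more locked-down launch, assuming a reasonably recent Docker CLI (the `nginx` image stands in for whatever service is being deployed):

    ```shell
    # Drop all Linux capabilities, then add back only what the process
    # actually needs; mount the root filesystem read-only and forbid
    # privilege escalation inside the container.
    docker run \
      --cap-drop=ALL \
      --cap-add=NET_BIND_SERVICE \
      --read-only \
      --security-opt no-new-privileges \
      nginx
    ```

    The default is considerably more permissive, which is why misconfiguration, rather than the kernel isolation itself, is so often the source of the "huge leaks" Bottomley describes.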