Virtual Machines Vs. Containers: A Matter Of Scope

Unless you've been hiding under a rock for the last year or so, you've no doubt witnessed the tremendous surge of interest in the use of various container technologies. One container technology in particular -- a project called Docker -- has received enormous attention over the last few months.

The comments I've heard about Docker have been almost unanimously positive, with some folks even suggesting that organizations building out cloud computing environments should abandon virtual machines (VMs) for containers due to their lower overhead and potentially better performance.

Amidst the unabashed love-fest surrounding Docker, I was encouraged to read a recent article from Rob Hirschfeld, a well-known figure within the OpenStack community. Rob put into words the very thoughts I was having: Is Docker hype, or is it real?

Asking this question led me to ask other questions. In particular, is abandoning full-machine virtualization for containers a real possibility? Is this a move that cloud architects should truly be considering?

Now, you might think I'm getting ready to bash containers. Nothing could be further from the truth. Container technologies, such as Docker and LXC, are incredibly useful additions to a cloud architect's toolkit. However, I think it's also important that we take a realistic look at these technologies and understand that there are places where containers make sense -- and places where they don't.

After thinking on this for a while, especially as it relates to Docker, I've arrived at a way of determining where containers make sense. It comes down to scope.

You see, a Docker container is intended to run a single application. You don't spin up general-purpose Docker containers; you create highly specific Docker containers intended to run MySQL, Nginx, Redis, or some other application.

So what happens if you need to run two distinct applications or services in a Docker environment? Well, you can run them together in a single container (typically via something like supervisord), but usually the recommendation is to use two separate containers. Docker's low overhead and quick startup times make running multiple containers less tedious, but hopefully it's clear that Docker has a specific scope. It's scoped to an application.
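To make that per-application scope concrete, here's a minimal sketch using the Docker SDK for Python (the docker package). The image names, container names, and port mapping are purely illustrative, and it assumes a local Docker daemon is running. Rather than cramming two services into one container, each gets a single-purpose container of its own:

```python
import docker

client = docker.from_env()  # connect to the local Docker daemon

# One container per application: Redis and Nginx each run in their own
# single-purpose container rather than sharing one.
redis = client.containers.run("redis:latest", name="app-redis", detach=True)
nginx = client.containers.run(
    "nginx:latest",
    name="app-web",
    ports={"80/tcp": 8080},  # publish Nginx on the host's port 8080
    detach=True,
)

for container in (redis, nginx):
    print(container.name, container.status)
```

Because each service lives in its own container, either one can be stopped, replaced, or scaled independently -- which is exactly the kind of isolation the per-application scope buys you.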

This isn't necessarily true for all containers. LXC -- upon which Docker was based before it switched to a new back-end engine named libcontainer -- isn't scoped to a specific application. Rather, LXC is scoped to an instance of Linux. It might be a different flavor of Linux (a CentOS container on an Ubuntu host, for example), but it's still Linux. Similarly, Windows-based container technologies are scoped to an instance of Windows.
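For contrast, here's a rough sketch of LXC's broader, whole-Linux scope, assuming the python3-lxc bindings and LXC's standard "download" template; the container name and release are illustrative. The result is a full CentOS userspace running on, say, an Ubuntu host's kernel:

```python
import lxc

# The name is illustrative; the container holds a full Linux userspace,
# not a single application.
container = lxc.Container("centos-guest")

# Build a CentOS 7 root filesystem from the "download" template. The guest
# distribution differs from the host, but both share the host's kernel.
if not container.defined:
    container.create("download", 0,
                     {"dist": "centos", "release": "7", "arch": "amd64"})

container.start()
print(container.name, container.state)  # e.g. "centos-guest RUNNING"
```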

A VM, on the other hand, has an even broader scope. It is scoped to any supported operating system. When using full-machine virtualization such as that provided by KVM, Xen, vSphere, or Hyper-V, you aren't limited to only Linux or only Windows; you can run pretty much any recent operating system release.

Why is this important? Understanding the scope of these various technologies -- Docker, LXC, and full-machine virtualization -- helps us see where it makes sense to put each of them to use. It also reminds cloud architects that each is suited to certain workloads and use cases. The best approach, therefore, isn't to advocate for abandoning VMs. Rather, it's to advocate for -- and design for -- the appropriate use of containers alongside VMs.

What do you think? Feel free to speak up in the comments. I'd love to hear your thoughts!