Cloud Infrastructure

Scott S. Lowe
Commentary

Virtual Machines Vs. Containers: A Matter Of Scope

Should container technologies like Docker replace virtual machines in cloud environments? It depends on the application, operating system, and OS instance.

Unless you've been hiding under a rock for the last year or so, you've no doubt witnessed the tremendous surge of interest in the use of various container technologies. One container technology in particular -- a project called Docker -- has received enormous attention over the last few months.

The comments I've heard about Docker have been almost unanimously positive, with some folks even suggesting that organizations building out cloud computing environments should abandon virtual machines (VMs) for containers due to their lower overhead and potentially better performance.

Amidst the unabashed love-fest surrounding Docker, I was encouraged to read a recent article from Rob Hirschfeld, a well-known figure within the OpenStack community. Rob put into words the very thoughts I was having: Is Docker hype, or is it real?

Asking this question led me to ask other questions. In particular, is abandoning full-machine virtualization for containers a real possibility? Is this a move that cloud architects should truly be considering?

Now, you might think I'm getting ready to bash containers. You couldn't be further from the truth. Container technologies, such as Docker and LXC, are incredibly useful additions to a cloud architect's toolkit. However, I think it's also important that we take a realistic look at these technologies and understand that there are places where containers make sense -- and places where they don't.

After thinking on this for a while, especially as it relates to Docker, I've arrived at a way of determining where containers make sense. It comes down to scope.

You see, a Docker container is intended to run a single application. You don't spin up general-purpose Docker containers; you create highly specific Docker containers intended to run MySQL, Nginx, Redis, or some other application.

So what happens if you need to run two distinct applications or services in a Docker environment? Well, you can run them together in a single container (typically via something like supervisord), but usually the recommendation is to use two separate containers. Docker's low overhead and quick startup times make running multiple containers less tedious, but hopefully it's clear that Docker has a specific scope. It's scoped to an application.
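As a rough sketch of that recommendation (the image names and options here are illustrative, not from the article), running two services as two single-purpose containers might look like this:

```shell
# Run MySQL and Nginx as two separate, single-purpose containers
# rather than bundling both into one container under supervisord.
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name web -p 80:80 nginx

# Each container is scoped to exactly one application;
# stopping one does not affect the other.
docker stop web
```

Because these commands assume a running Docker daemon and pulled images, treat them as a sketch of the pattern rather than a copy-paste recipe.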

This isn't necessarily true for all containers. LXC -- upon which Docker was based before it switched to a new back-end engine named libcontainer -- isn't scoped to a specific application. Rather, LXC is scoped to an instance of Linux. It might be a different flavor of Linux (a CentOS container on an Ubuntu host, for example), but it's still Linux. Similarly, Windows-based container technologies are scoped to an instance of Windows.
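To illustrate that difference in scope (a hedged sketch; template names and flags vary across LXC versions and distributions), creating a CentOS container on an Ubuntu host might look like:

```shell
# On an Ubuntu host, create and start a CentOS container.
# The container shares the host's Linux kernel but runs a CentOS
# userland -- a different flavor of Linux, but still Linux.
sudo lxc-create -n centos01 -t centos
sudo lxc-start -n centos01 -d

# Confirm the container's userland is CentOS.
sudo lxc-attach -n centos01 -- cat /etc/redhat-release
```

The point of the sketch: LXC gives you a full Linux OS instance, not a single application, which is exactly the scoping distinction drawn above.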

A VM, on the other hand, has an even broader scope. It is scoped to any supported operating system. When using full-machine virtualization such as that provided by KVM, Xen, vSphere, or Hyper-V, you aren't limited to only Linux or only Windows; you can run pretty much any recent operating system release.

Why is this important? Understanding the scope of these various technologies -- Docker, LXC, and full-machine virtualization -- helps us understand where it makes sense to put them to use. It also helps cloud architects understand that each is suited to particular workloads and use cases. The best approach, therefore, isn't to advocate for abandoning VMs. Rather, it's to advocate for -- and design around -- the appropriate use of containers alongside VMs.

What do you think? Feel free to speak up in the comments. I'd love to hear your thoughts!

Scott Lowe is a blogger, speaker, best-selling author, and IT industry veteran. He focuses his time and efforts on open source, cloud computing, virtualization, networking, and related data center technologies. Scott currently works for VMware, Inc., in the NSX group, but the ... View Full Bio
Comments
David Klebanov
5/28/2014 | 2:51:37 PM
Ubiquitous infrastructure support
Hi Scott,

Thank you for a thought-provoking article. I agree with you that full instantiation of virtual resources in the form of a virtual machine, with its scheduling mechanisms for the underlying physical server components, definitely still holds true on many occasions. At the same time, the appeal of containers, with their increased scale and decreased demands on underlying server resources, is gaining lots of traction.

In your mind, how would you guide customers making strategic investment decisions in infrastructure components (compute, services, storage, network, orchestration, etc.) today, so their investments are protected tomorrow should they decide to pursue containers vs. VMs?

Thank you, David

@davidklebanov