Virtual Machines Vs. Containers: A Matter Of Scope

Should container technologies like Docker replace virtual machines in cloud environments? It depends on the application, operating system, and OS instance.

Unless you've been hiding under a rock for the last year or so, you've no doubt witnessed the tremendous surge of interest in the use of various container technologies. One container technology in particular -- a project called Docker -- has received enormous attention over the last few months.

The comments I've heard about Docker have been almost unanimously positive, with some folks even suggesting that organizations building out cloud computing environments should abandon virtual machines (VMs) for containers due to their lower overhead and potentially better performance.

Amidst the unabashed love-fest surrounding Docker, I was encouraged to read a recent article from Rob Hirschfeld, a well-known figure within the OpenStack community. Rob put into words the very thoughts I was having: Is Docker hype, or is it real?

Asking this question led me to ask other questions. In particular, is abandoning full-machine virtualization for containers a real possibility? Is this a move that cloud architects should truly be considering?

Now, you might think I'm getting ready to bash containers. Nothing could be further from the truth. Container technologies such as Docker and LXC are incredibly useful additions to a cloud architect's toolkit. However, I think it's also important that we take a realistic look at these technologies and understand that there are places where containers make sense -- and places where they don't.

After thinking on this for a while, especially as it relates to Docker, I've arrived at a way of determining where containers make sense. It comes down to scope.

You see, a Docker container is intended to run a single application. You don't spin up general-purpose Docker containers; you create highly specific Docker containers intended to run MySQL, Nginx, Redis, or some other application.

So what happens if you need to run two distinct applications or services in a Docker environment? Well, you can run them together in a single container (typically via something like supervisord), but usually the recommendation is to use two separate containers. Docker's low overhead and quick startup times make running multiple containers less tedious, but hopefully it's clear that Docker has a specific scope. It's scoped to an application.
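To make that one-app-per-container pattern concrete, here's a minimal sketch using the Docker SDK for Python to launch two single-purpose containers instead of stuffing both services into one. The container names, images, and port mapping are illustrative assumptions, not a prescription.

    # A minimal sketch (assumes Docker is running locally and the
    # "docker" Python SDK is installed: pip install docker).
    # Each service gets its own single-purpose container.
    import docker

    client = docker.from_env()

    # One container scoped to the web tier...
    web = client.containers.run(
        "nginx:latest",
        name="web",                 # illustrative name
        ports={"80/tcp": 8080},     # map host port 8080 to container port 80
        detach=True,
    )

    # ...and a second, separate container scoped to the cache.
    cache = client.containers.run(
        "redis:latest",
        name="cache",               # illustrative name
        detach=True,
    )

    print(web.status, cache.status)

The point isn't the specific services; it's that each container carries exactly one application, so composing a system means composing containers.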

This isn't necessarily true for all containers. LXC -- upon which Docker was based before it switched to a new back-end engine named libcontainer -- isn't scoped to a specific application. Rather, LXC is scoped to an instance of Linux. It might be a different flavor of Linux (a CentOS container on an Ubuntu host, for example), but it's still Linux. Similarly, Windows-based container technologies are scoped to an instance of Windows.
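To illustrate that OS-instance scope, here's a hedged sketch that shells out to LXC's stock "download" template to build a CentOS container on an (assumed) Ubuntu host. The container name and release are placeholders; the container gets a full CentOS userspace but still runs on the host's Linux kernel.

    # A rough sketch: create a CentOS container on an Ubuntu host using
    # LXC's "download" template (assumes LXC is installed and run as root).
    import subprocess

    container_name = "centos-test"  # placeholder name

    subprocess.run(
        [
            "lxc-create",
            "-t", "download",        # generic image-download template
            "-n", container_name,
            "--",                    # everything after this goes to the template
            "--dist", "centos",
            "--release", "7",
            "--arch", "amd64",
        ],
        check=True,
    )

    # Start it: an entire CentOS userspace, not a single application,
    # sharing the Ubuntu host's kernel.
    subprocess.run(["lxc-start", "-n", container_name, "-d"], check=True)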

A VM, on the other hand, has an even broader scope. It is scoped to any supported operating system. When using full-machine virtualization such as that provided by KVM, Xen, vSphere, or Hyper-V, you aren't limited to only Linux or only Windows; you can run pretty much any recent operating system release.
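By way of contrast, the sketch below uses the libvirt Python bindings to boot a KVM virtual machine from an installer ISO; the guest could be Linux, Windows, or any other OS the hypervisor supports, because the VM brings its own kernel. The VM name, ISO path, and sizing are purely illustrative assumptions.

    # A hedged sketch using the libvirt Python bindings (pip install libvirt-python).
    # The hypervisor doesn't care which OS is on the ISO -- Linux, Windows, BSD, etc.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>any-os-vm</name>                      <!-- illustrative name -->
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os>
        <type arch='x86_64'>hvm</type>
        <boot dev='cdrom'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='/var/lib/libvirt/images/installer.iso'/>  <!-- any OS installer -->
          <target dev='hda' bus='ide'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")   # connect to the local KVM hypervisor
    vm = conn.createXML(DOMAIN_XML, 0)      # define and start a transient VM
    print("Booted VM:", vm.name())
    conn.close()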

Why is this important? Understanding the scope of these various technologies -- Docker, LXC, and full-machine virtualization -- helps us understand where it makes sense to put them to use. It also helps cloud architects understand that each is suited to certain workloads and use cases. The best approach, therefore, isn't necessarily to advocate for abandoning VMs. Rather, it's to advocate in favor of -- and design for -- the appropriate use of containers alongside VMs.

What do you think? Feel free to speak up in the comments. I'd love to hear your thoughts!

Scott Lowe is a blogger, speaker, best-selling author, and IT industry veteran. He focuses his time and efforts on open source, cloud computing, virtualization, networking, and related data center technologies. Scott currently works for VMware, Inc., in the NSX group, but the ...
Comments
David Klebanov, 5/28/2014 | 2:51:37 PM
Ubiquitous infrastructure support
Hi Scott,

Thank you for a thought-provoking article. I agree with you that full instantiation of virtual resources in the form of a virtual machine, with its scheduling mechanisms for the underlying physical server components, still holds true on many occasions. At the same time, the appeal of containers, with their increased scale and decreased demands on underlying server resources, is gaining lots of traction.

In your mind, how would you guide customers making strategic investment decisions in infrastructure components (compute, services, storage, network, orchestration, etc.) today, so that their investments are protected tomorrow should they decide to pursue containers vs. VMs?

Thank you, David

@davidklebanov
Brian.Dean, 5/28/2014 | 8:48:00 PM
Re: Ubiquitous infrastructure support
Containers are good because they create an environment that distinguishes between read-only and writable data. This is useful if, for example, 1,000 copies of a 5GB image needed to be run: the space requirement could still be only 5GB (which would also be nice if the image needed to be in memory), whereas VMs would require 1,000 x 5GB of space. However, there would be instances where space is of little concern and VMs are given preference, and if that's not the case at present, some future requirement may make it so -- I guess the best security would be flexibility.
ReturnoftheMus, 5/29/2014 | 6:56:27 AM
Point of Correction: docker is not Containers
Docker is itself an application that uses containers to create lightweight packages for applications with instant portability -- hence, it is a consumer of container technology.
scottslowe, 5/29/2014 | 11:19:36 AM
Re: Point of Correction: docker is not Containers
ReturnoftheMus, technically you are correct. Docker itself, consisting of both a daemon and some userspace tools, is not a container. However, given that Docker is expressly designed to work with container technologies in the Linux kernel (cgroups, namespaces, etc.), I feel it is reasonable to refer to Docker as a container technology. This simplifies the discussion when talking about Docker and allows the conversation to move forward to how we can use Docker to support applications and operations in modern data centers. Thanks for your comment!
scottslowe, 5/29/2014 | 11:26:21 AM
Re: Ubiquitous infrastructure support
Brian, you are correct that some container technologies (Docker is one of them) use techniques like layered file systems to dramatically reduce both the space required for containers and the time required to launch a container. However, not all container technologies do this. Using "straight" LXC, for example, won't gain you the benefits of a layered file system and the reductions in space requirements that result from it. Further, the rise of high-performance inline deduplication technology is making the formerly onerous space requirements for VMs far more palatable. This is why both VMs and containers have a place in the toolbelt of a modern cloud architect; it allows cloud architects to use the right tool for the job at hand. Thanks for commenting!
scottslowe, 5/29/2014 | 11:34:43 AM
Re: Ubiquitous infrastructure support
Hi David, thanks for taking the time to comment. I don't know that there are any "hard and fast" rules for when customers should deploy VMs versus deploying containers. There are a number of factors that should be considered, though, including application support (most container technologies are strongly focused on Linux and Linux applications), operational readiness (staff training to implement and support containers, tooling that supports containers, organizational readiness to adopt and support open source projects, etc.), and other requirements (security might be one; I don't know that containers have been as fully vetted as VMs from a security perspective). Further, as organizations move more heavily into private cloud deployments using cloud management platforms (CMPs) such as OpenStack, CloudStack, vCAC, OpenNebula, Eucalyptus, and others, the ability and readiness of the CMP to support containers is another question customers must answer. What of scale? Does the customer really need the enhanced scalability that containers can offer? Customers must evaluate all these factors before making a decision. I would stress again that this is not an "either/or" situation, but rather an "and" decision. There's no reason customers can't deploy both VMs and containers, as best suits each situation.
jgherbert, 5/29/2014 | 10:31:46 PM
Re: Ubiquitous infrastructure support
Thanks for the clear post on this topic, Scott. This is definitely something that has been cropping up, and an area where I'm interested to learn more, so this brief overview is very well timed for me.

The security aspect you mention is going to be a key concern for many users, I suspect, so that vetting process will be key if we're to see adoption similar to VMs. Good stuff.
ReturnoftheMus, 5/30/2014 | 9:11:05 AM
Re: Point of Correction: docker is not Containers
There is no doubt Docker has emerged as a leading light in the promotion of container technology for the enterprise; however, I felt the title of your blog post raised a somewhat premature debate. As we know, some of the world's major CSPs favour containers over VMs, especially at the PaaS and SaaS layers, enabling them to get economies of scale that they otherwise wouldn't have. I'd also stress that even though container unification was started back in 2011, it was only in 2013 that we got that unification in the 3.12 kernel release, which is way beyond where most enterprise kernels sit today. Security has largely been addressed with the introduction of the user namespace; however, the distros have only just started enabling it. My overall point is that now that we have this unification, the door is open to many more Docker-type applications, with endless possibilities.
scottslowe, 5/30/2014 | 9:27:26 AM
Re: Point of Correction: docker is not Containers
ReturnoftheMus, I appreciate the continued discussion. As container technologies continue to mature (Docker is still pre-1.0, for example, and you mentioned the relative newness of some of the kernel features that support containers), the use cases for containers will undoubtedly increase. On that point I think you and I both agree. As you rightly point out, containers are already becoming a key part of PaaS platforms (the nascent Solum project in OpenStack, for example, will heavily leverage Docker containers). As for whether the VMs vs. containers debate is premature, I can honestly say that the amount of interest I've seen in replacing VMs with containers justifies the article. Personally, I think a lot of it is hype, and what I wanted to do with this article was bring the discussion and the focus back to reality. Let's focus on where it makes sense to use containers, and continue using mature VM technologies and infrastructures where that makes sense for our businesses. Thanks!
francescovigo, 6/3/2014 | 2:40:09 PM
Licensing and Support

Hi Scott,

Thanks for sharing your vision on this topic.

I am not an expert, but I think licensing considerations will influence this debate as well: many technologies, including hypervisor add-ons and OSs, are licensed per VM nowadays. Being able to bypass per-VM limits by spinning up lots of containers will probably lead some organizations to choose Docker or other container technologies as a way to reduce licensing costs -- at least until vendors modify their EULAs and licensing models to account for it. But will it be easy enough to meter the number of running containers in an organization? Maybe, in the future, licensing models based on actual resource usage, even for private clouds, could emerge again to handle these scenarios.

Another point is support: if you're running a SLES container within a RHEL VM running on vSphere (I don't know if that's possible, but let's assume so), who is going to help you when something goes wrong?

Thanks,
Francesco
