
Containers: What IT Pros Should Consider

All the talk of the cloud, mobility, and big data is finally taking a back seat. The buzz now is around the Internet of Things, software-defined networking, hyperconvergence, and a disruptive back-end technology: containers.

Containerization holds great promise for dramatic improvements in back-end flexibility and scalability, along with cost savings. IT pros need to form an opinion on the emerging technology and be prepared for it in their environments. Any new infrastructure investment must take containerization into account.

Think of containers as a wrapper for applications built for workload portability, promising high application-to-hardware ratios, predictable performance, and easy upkeep. The idea is not new; Google has used containers for its internal systems for a decade. But the resurgence -- if that is what we should call it -- is due to Docker Inc., the original author and sponsor of the open source project that aims to make building and shipping software applications easier.

Docker's success in advancing the technology was illustrated last week, when the company was joined by a legion of supporters in the Open Container Project. Backers of the standards effort include industry giants Amazon Web Services, Cisco, EMC, Google, HP, IBM, Microsoft, and VMware, among others.

The containers concept is exciting because you package your application with all of its dependencies -- code, libraries, configuration, system tools (anything you would have installed on your local server's OS for that application) -- into a wrapper file system, the container image. Because everything is there, a containerized application should run as well as it would natively. The open source Docker toolkit makes it easy to develop applications using microservices, and to make changes and updates within one container without impacting adjacent containers.
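As a sketch of what that wrapper looks like in practice -- the application, file names, and base image below are hypothetical -- a Dockerfile declares everything that travels with the image:

    # Hypothetical Dockerfile for a small Python web service. Everything the
    # app needs -- base image, libraries, code, configuration -- is declared
    # here and packaged into a single portable image.

    # Start from a base image that provides the language runtime
    FROM python:2.7

    # Install the app's library dependencies
    COPY requirements.txt /app/
    RUN pip install -r /app/requirements.txt

    # Add the application code
    COPY . /app
    WORKDIR /app

    # Document the listening port and define how the container starts the app
    EXPOSE 8000
    CMD ["python", "app.py"]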

With worry about conflicts and environmental mistakes erased, the result is a double win. First, since more applications can sit within containers on the same physical hardware, there are dramatic cost savings: four to six times more server applications can reside on the same hardware compared to the number of VMs. Second, the ready-to-run applications are easier to develop, modify, and test. Docker claims that teams using containers ship products seven times more frequently than traditional software development teams do. Development environments can be set up exactly like a live server, and projects can easily be tested on new or different servers.
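To make the portability claim concrete -- the image and repository names below are placeholders -- the same image a developer builds locally can be pulled and run unchanged on any server that has Docker installed:

    # On the developer's machine: build the image and push it to a registry
    docker build -t myorg/myapp:1.0 .
    docker push myorg/myapp:1.0

    # On any test or production server running Docker: pull and run the
    # exact same image -- no environment-specific setup required
    docker pull myorg/myapp:1.0
    docker run -d -p 8000:8000 myorg/myapp:1.0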

For IT departments, an obvious question to ask is: How do containers compare to virtual machines (VMs)? Virtualization offers workload isolation through hardware abstraction; wherever the workload is, it talks through the hypervisor to the hardware. Container isolation is riskier: all containers on a host share the host's kernel and memory, with isolation enforced by kernel features rather than by a hypervisor, and there are questions about whether communications might inadvertently be exposed to other containers. Also, the management and assessment tools are not as fully developed as they are for VMs.
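To illustrate where that isolation comes from -- the image name and limits below are hypothetical -- each container is constrained at run time by kernel-level controls such as cgroups, not by a hypervisor boundary:

    # Cap a container's memory and CPU share, and mount its root
    # filesystem read-only; the limits are enforced by the host kernel
    docker run -d --name myapp \
        --memory=512m \
        --cpu-shares=512 \
        --read-only \
        myorg/myapp:1.0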

Hypervisors, on the other hand, have constraints of their own, such as being vendor specific -- VMware vs. Microsoft, for example. With containers, the expectation is that they will operate seamlessly on any cloud without needing changes.

Containers can be run within VMs, but then you lose their high-density advantage, because each VM carries the full hardware stack as well as an OS within it. Containers are much thinner by comparison: the server hardware stack is not abstracted at all; abstraction happens at the OS kernel level.

While containers offer speed and agility, they don't have the flexibility of VMs. VMs carry within them their own “guest OS” needed by an application to run. So a VM running Linux can sit next to a VM running Windows, and both can sit on, say, a hardware host running Windows. You can’t do this with containers: a Windows container cannot sit on a Linux host, or vice versa. Containers on a host must all share the OS kernel of that host.
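A quick way to see that shared kernel in practice (the output is illustrative, and a Linux host with Docker installed is assumed):

    # On the host, check the kernel version
    uname -r

    # Run the same command inside a container built from the public
    # ubuntu image: it reports the host's kernel, because the container
    # has no kernel of its own
    docker run --rm ubuntu uname -r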

For application development and service, Docker Inc. is offering a range of plans, including a free-trial package with one Docker Trusted Registry for Server and 10 Docker engines, and a Starter Edition for $150 per month. In addition, Amazon Web Services, Microsoft and IBM are all Docker authorized service provider partners, and offer commercial Docker products.

Newer infrastructure technologies like containers call for both education and prudent risk-taking. IT must lead businesses into the future rather than wait for it. Find the right business case and test the waters. To begin the adventure, go to the Docker site and select the “get started” section that applies to you -- Mac OS, Linux, or Windows. In less than an hour, you will learn how to install Docker, run a software image in a container, get acquainted with Docker Hub, create your own image to run in a container, and push that image to Docker Hub for others to use.
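For a flavor of those first steps, the commands below sketch the basic workflow; the repository name is a placeholder, and Docker is assumed to be installed already:

    # Verify the installation by running a first container from a public image
    docker run hello-world

    # Build an image from a Dockerfile in the current directory
    docker build -t myname/myapp .

    # Run the new image in a container, removing the container when it exits
    docker run --rm myname/myapp

    # Log in and push the image to Docker Hub so others can pull it
    docker login
    docker push myname/myapp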