Server Virtualization Can Save Costs

But watch out and avoid spending a dollar to save a penny in the quest to optimize

April 14, 2009


Part seven in a series. Greg Schulz is the founder of StorageIO and the author of The Green and Virtual Data Center.

Computers -- whether blade servers, desktops, laptops, mainframes, personal computers, servers, or workstations -- are a key component and resource of any data center or information factory. Computers execute the program software that performs various application and business functions. Hardware does not function without some form of software -- microcode, firmware, operating, or application software -- and software does not function without hardware. The next truly revolutionary technology, in my opinion, will be software that does not require hardware and hardware that does not require software. Until then, it is important to understand the relationship between the need for more processing or compute capability and the resulting energy demands, and the need to improve energy efficiency to reduce power and cooling costs and their environmental impact.

In the typical data center, computers are the second largest consumer of electrical power, after cooling. In addition to requiring power, cooling, and floor space, computers have an environmental health and safety footprint in the form of electronic circuit boards, battery-backed internal power supplies, and other potentially hazardous substances that need to be recycled and disposed of properly. The larger the computer, the more components it will have; smaller computers, such as laptop or desktop workstations, have fewer and smaller components, although they do have batteries and monitors or screens.

Servers consume different amounts of electrical power and generate different amounts of heat depending on their operating mode -- for example, zero power consumption when powered off but high power consumption during startup. Servers use more energy when running active or busy workloads and less during low-power, sleep, or standby modes. With the need for faster processors to do more work in less time, there is a corresponding effort by manufacturers to enable processor chips to do more work per watt of energy as well as reduce the overall amount of energy consumed.

In addition to boosting energy efficiency by doing more work per watt of energy consumed, computer chips also support various energy-saving modes, such as the ability to slow down and use less energy when there is less work to be done. Other approaches for reducing energy consumption include adaptive power management, intelligent power management, adaptive voltage scaling, and dynamic bandwidth switching -- techniques that vary the amount of energy used by varying the performance level. As a generic example, a server's processor chip might require 1.4 volts at 3.6 GHz for high performance, 1.3 volts at 3.2 GHz for medium performance, or 1.2 volts at 2.8 GHz for low performance and energy savings. Intel SpeedStep and other energy efficiency and performance enhancement technologies, such as those found in the new Intel Nehalem-based processor chips, reduce energy use when higher performance is not needed but boost performance when work needs to be done.

Another approach being researched and deployed to address power, cooling, floor space, and environmental health and safety (PCFE) issues at the server chassis, board, or chip level is improved direct and indirect cooling. Traditionally, air has been used for cooling and removing heat, with fans installed inside server chassis, cabinets, and power supplies, or even mounted directly on high-energy-consuming and high-heat-generating components. Liquid cooling has also been used, particularly for large mainframes and supercomputers. With the advent of improved cooling fans, moderately powered servers, and relatively efficient ambient air cooling, liquid-based cooling was not often considered for servers until recently.
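The voltage and frequency example above can be sketched as a simple state-selection policy. This is an illustrative sketch only: the state table mirrors the article's generic figures, but the utilization thresholds and the selection logic are assumptions, and real P-state transitions are handled by the CPU and operating system, not application code.

```python
# Illustrative DVFS (dynamic voltage and frequency scaling) sketch.
# State figures follow the article's generic example; thresholds are
# made-up assumptions for illustration.
P_STATES = [
    {"name": "high",   "volts": 1.4, "ghz": 3.6},
    {"name": "medium", "volts": 1.3, "ghz": 3.2},
    {"name": "low",    "volts": 1.2, "ghz": 2.8},
]

def pick_state(utilization):
    """Pick a performance state from recent CPU utilization (0.0 to 1.0)."""
    if utilization > 0.75:
        return P_STATES[0]
    if utilization > 0.40:
        return P_STATES[1]
    return P_STATES[2]

def relative_power(state):
    """Dynamic CPU power scales roughly with V^2 * f; the capacitance
    term cancels out when comparing states on the same chip."""
    return state["volts"] ** 2 * state["ghz"]

savings = 1 - relative_power(P_STATES[2]) / relative_power(P_STATES[0])
print(f"low vs. high state: roughly {savings:.0%} less dynamic power")
```

Because dynamic power varies with the square of voltage, even the modest voltage drop in the example yields a disproportionately large energy saving relative to the performance given up.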

A variation of the tin-wrapped software model is the software-wrapped appliance or virtual appliance. In this case, vendors use a virtual machine (VM) to host their software on a physical server or appliance that is also being used for other functions. For example, database vendors or virtual tape library software vendors might install their solution into separate VMs on a physical server with applications running in other VMs or partitions. This approach can be used to consolidate underutilized servers, but caution should be exercised to avoid over-consolidation and oversubscription of available physical hardware resources, particularly for time-sensitive applications.

Disk storage solutions such as hard drives are slower but lower in cost than RAM-based memory. They are also persistent; that is, as noted earlier, data is retained on the device even when power is removed. From a PCFE perspective, memory and storage performance, availability, capacity, and energy should be balanced against a given function, quality of service, and service-level objective at a given cost. It is not enough to consider only the lowest cost for the largest amount of memory or storage. In addition to capacity, other storage-related metrics to consider include percent utilization, operating system page faults and page read/write operations, memory swap activity, and memory errors.
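The point about matching a tier to a service-level objective rather than simply buying the cheapest capacity can be illustrated with a small sketch. The tier names are generic, and the cost and latency figures are made-up placeholders, not vendor data.

```python
# Hypothetical storage tiers -- all figures are illustrative assumptions.
TIERS = [
    {"name": "RAM",       "usd_per_gb": 30.00, "latency_us": 0.1,    "persistent": False},
    {"name": "SSD",       "usd_per_gb": 3.00,  "latency_us": 100.0,  "persistent": True},
    {"name": "hard disk", "usd_per_gb": 0.10,  "latency_us": 8000.0, "persistent": True},
]

def cheapest_tier(latency_slo_us, need_persistence):
    """Return the lowest-cost tier that meets the latency service-level
    objective and the persistence requirement, or None if nothing fits."""
    candidates = [t for t in TIERS
                  if t["latency_us"] <= latency_slo_us
                  and (t["persistent"] or not need_persistence)]
    return min(candidates, key=lambda t: t["usd_per_gb"]) if candidates else None

# A relaxed 10 ms SLO with persistence lands on the cheapest persistent
# tier, while a tight 500-microsecond SLO forces the costlier SSD tier.
print(cheapest_tier(10_000, True)["name"])
print(cheapest_tier(500, True)["name"])
```

The takeaway matches the text: the right question is not "what is the cheapest storage?" but "what is the cheapest storage that still meets the service-level objective?"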

There are many questions that should be considered before deploying server virtualization:

  • What are the application requirements and needs (performance, availability, capacity, costs)?

  • Which servers can and cannot be consolidated?

  • Will a solution enable simplified software management or hardware management?

  • Will a solution be workable for enabling dynamic application and resource management?

  • How will the technology work with existing and other new technologies?

  • How will scaling performance, capacity, availability, and energy needs be addressed?

  • Who will deploy, maintain, and manage the solution?

  • Will vendor lock-in be shifted from a hardware to a software vendor?

  • How will different hardware architectures and generations of equipment coexist?

  • How will the solution scale with stability?

A common use for server virtualization is consolidation of underutilized servers to lower hardware and associated management costs, energy consumption, and cooling requirements. Various approaches to consolidation are possible. For example, a server's operating system and applications can be migrated as a guest to a VM in a virtualization infrastructure. The VMs can exist and run on a virtualization infrastructure, such as a hypervisor, that runs bare metal, or natively on a given hardware architecture, or as a guest application on top of another operating system. Depending on the implementation, different types of operating systems can exist as guests on the VMs -- for example, Linux, UNIX, and Microsoft Windows all coexisting on the same server at the same time, each in its own guest VM.

Consolidation of underutilized servers can address PCFE issues and reduce hardware costs. However, server virtualization does not by itself address operating system and application consolidation and the associated cost savings. If cost savings are a key objective, in addition to reducing hardware costs, consider how software costs, including licenses and maintenance fees, can be reduced or shifted to boost savings. In the near term, there is a large market opportunity for server consolidation and an even larger market opportunity for virtualization of servers to enable scaling and transparent management on a longer-term basis.
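A back-of-the-envelope estimate shows the kind of hardware and energy savings consolidation can deliver. Every input below is a planning assumption chosen for illustration, and the sketch deliberately ignores cooling overhead, peak-versus-average load, and the software license and maintenance costs the text warns about.

```python
import math

def consolidation_estimate(n_servers, avg_util, target_util,
                           watts_per_old_server, watts_per_host,
                           usd_per_kwh=0.10):
    """Rough consolidation sizing and annual energy-cost savings.

    All inputs are planner-supplied assumptions; the estimate ignores
    cooling overhead, peak loads, and software licensing costs.
    """
    # Hosts needed to absorb the aggregate load at the target utilization.
    hosts = math.ceil(n_servers * avg_util / target_util)
    old_kw = n_servers * watts_per_old_server / 1000
    new_kw = hosts * watts_per_host / 1000
    annual_savings = (old_kw - new_kw) * 24 * 365 * usd_per_kwh
    return hosts, annual_savings

# Twenty servers averaging 10% busy, consolidated onto hosts run at
# 60% utilization: ceil(20 * 0.10 / 0.60) = 4 hosts.
hosts, saved = consolidation_estimate(20, 0.10, 0.60, 300, 450)
print(hosts, round(saved))
```

Even a crude model like this makes the trade-off visible: the electricity savings are real, but if per-VM software licensing exceeds them, the consolidation spends a dollar to save a penny.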

Action and takeaway points include:

  • Use caution with consolidation to avoid introducing performance or availability problems.

  • Look into how virtualization can be used to boost productivity and support scaling.

  • Explore new technologies that support energy efficiency and boost productivity.

  • Understand data protection and management issues pertaining to virtualization.

  • Server consolidation addresses hardware costs; consider software costs separately.

  • Blade servers can be used for consolidation as well as enabling scaling.

  • Not all servers can be simply powered down, and not all redundancy can be simply removed.

  • Some servers can run part of the time with lower performance and energy consumption.

  • Investigate intelligent and dynamic cooling, including cooling closer to heat sources.

Until the next truly revolutionary technology appears, which will be hardware that does not need software and software that does not need physical hardware, applications and virtual servers will continue to rely on physical hardware, which consumes electricity and generates heat. Watch out for having to spend a dollar to save a penny in the quest to optimize.

Greg Schulz is the founder of StorageIO, an IT industry research and consulting firm. He has worked as a programmer, systems administrator, disaster recovery consultant, and capacity planner for various IT organizations, and has also held positions with industry vendors. He is the author of the new book "The Green and Virtual Data Center" (CRC) and of the SNIA-endorsed book "Resilient Storage Networks" (Elsevier).

InformationWeek has published an in-depth report on data center unification, available for download (registration required).
