4 VDI Planning Tips

Make sure you understand the true costs and network requirements of virtual desktop infrastructure before making the leap to VDI, experts say.

Virtual desktop infrastructure (VDI) promises flexibility, ease of management and improved security, but organizations shouldn't rush to jump on the VDI train. Missteps can lead to cost overruns and performance headaches, making it critical that IT teams consider a number of factors before diving into a VDI deployment.

Here are four steps to ensure a successful VDI implementation:

1. Determine Need

"The first thing is to have a defined reason for making a move to virtualization," says Chip Timm, president at IT service provider TR Technologies. "Once that’s determined--be it eliminating energy usage, consolidation of systems, security, more consistent systems management or some other reason--it then needs to be determined if VDI is a feasible option over a terminal server solution or simply upgrading existing infrastructure."

That includes knowing what the line-of-business applications are and what desktop virtualization methods are supported, as well as user storage requirements and if there is a need to support a bring-your-own device scenario, he says.

To Bill Cassidy, CTO of data center technology consultant IT Partners, there are a few critical reasons for organizations to consider desktop virtualization.

"In general," Cassidy says, "we encourage customers to look at VDI technologies when one or more of the following conditions are present: planning a desktop OS upgrade (for example, Windows XP to Windows 7/8); a traditional desktop hardware refresh is upcoming or is past due; [or] other business drivers are posing challenges to the traditional desktop model. This could be anything from offshore development security concerns to PCI compliance to user mobility initiatives."

An invalid reason is simply that "everyone says, 'This is what we should do,'" he says.

2. Calculate Full Cost

Understanding the true cost of a VDI deployment is a common challenge organizations face, he adds, noting that far too often there is a belief that deploying VDI in lieu of a traditional desktop refresh must be less expensive. In many cases, this is due to a belief that a new VDI environment can be deployed alongside existing virtual server workloads--something that should be discouraged due to the differences in performance characteristics of desktop and server workloads.

Other times, failing to fully understand the costs of all products, licenses, training and other expenses associated with VDI projects leads to costs being underestimated, Cassidy says.
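
A quick back-of-envelope comparison can show whether VDI actually beats a straight desktop refresh on cost for a given environment. The sketch below is purely illustrative: every figure (host hardware, shared storage, broker licensing, thin clients, ongoing support) is a placeholder assumption rather than a vendor quote, and the seat count and amortization period should be replaced with your own numbers.

```python
# Illustrative per-seat cost comparison: VDI vs. a traditional desktop refresh.
# All dollar figures are placeholder assumptions -- substitute real quotes.

def per_seat_cost(capex_items, annual_opex, seats, years=4):
    """Amortize one-time costs over the refresh cycle, then add yearly operating cost per seat."""
    total_capex = sum(capex_items.values())
    return total_capex / seats / years + annual_opex / seats

vdi = per_seat_cost(
    capex_items={
        "host_servers": 120_000,       # hypervisor hosts sized for the desktop pool
        "shared_storage": 60_000,      # storage tier for user data
        "vdi_broker_licenses": 45_000,
        "thin_clients": 55_000,
    },
    annual_opex=30_000,                # support, power, OS/desktop licensing per year
    seats=500,
)

refresh = per_seat_cost(
    capex_items={"new_desktops": 350_000, "imaging_and_rollout": 25_000},
    annual_opex=50_000,                # desk-side support, break/fix, power
    seats=500,
)

print(f"VDI: ~${vdi:,.0f} per seat per year")
print(f"Desktop refresh: ~${refresh:,.0f} per seat per year")
```

The point is less the specific output than making every line item explicit, so that licenses, training and storage can't quietly fall out of the estimate.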

The cost of storage, particularly expensive SANs, has been the single largest reason enterprises have been sluggish in adopting VDI, says Shaun Coleman, VP of products at CloudVolumes. Shared storage or a SAN may not actually be needed; instead, organizations could use locally attached SATA or PCIe SSDs in their hypervisor hosts.

[Find out ways to provide a rich desktop experience but not get crushed by storage costs with your VDI rollout in "Solving VDI Problems with SSDs and Data Deduplication."]

"Use of non-persistent desktops and solutions … that allow you to have a single copy of an app shared across all users allows for the use of inexpensive high-speed direct attached SSD," he says. "With inexpensive high-speed local storage and server class CPUs, end users may in fact get better performance than they would have had on a physical desktop. Shared storage can then be used only for those user-specific things needing a highly reliable storage fabric, such as their documents and settings."

3. Understand Network Requirements

Many organizations make the mistake of "diving head first into VDI" without considering how users will access their VDI instances over the network, Coleman warns. This makes it very important for enterprises to size their networks correctly, he adds.

"Bandwidth is not the most important consideration; latency is the killer," he says. "Enterprises should size their network for VDI similar to how they have deployed VoIP, whereby the user’s phone was near to the PBX and had a relatively low latency connection (sub 100ms). Endpoints and VDI instances should be accessed, if at all possible, over a LAN and within a relatively short distance from the servers/hypervisors running the virtual machines."

4. Conduct Thorough Planning

As with most IT projects, a successful VDI deployment becomes more likely if adequate time is spent in the assessment, planning and design phases, notes Cassidy.

"Poor design can include mixing desktop and server workloads, storage designed without an understanding of desktop I/O requirements and behaviors, networking issues including LAN and WAN bandwidth constraints--the list is practically endless, and most [issues] truly cannot be mitigated without a proper VDI assessment," he says.

Beyond the challenges with the supporting infrastructure, organizations can run into problems if they don't understand their complete desktop application catalog, he adds. Most midsize and larger organizations have dozens, if not hundreds, of applications on desktops, he says.

"Will all of these accept the virtual hardware platform presented by the VDI solution? Can they be packaged using an application virtualization or packaging tool, if that is also part of the VDI solution? Add to the mix applications with local PC hardware dependencies, USB or parallel port key dongles, innumerable varieties of local printers and other peripherals, and it can get ugly quickly," he says.

[Get insight into the impact of VDI on storage infrastructures and learn ways to tackle the problem in Howard Marks' session "Storage Solutions For VDI" at Interop New York Sept. 30-Oct. 4. Register today!]


