Challenges Facing The Software-Defined Data Center

Virtualization has disrupted the data center, but IT must overcome several hurdles before it can move to the next level.

Jesse St. Laurent

December 30, 2014


Innovation often comes with a price. Introduce a solution to improve on the present mode of operation, and it just might break a few things along the way. Our transition from horse and buggy to the mass-produced automobile meant that we could go farther, faster. However, dependence on this new mode of transportation introduced new challenges, including the availability and quality of roads, the need for readily available fuel and auto mechanics, and the creation of noise and air pollution. In many cases, those challenges spawned new markets of their own.

Virtualization has similarly disrupted today's data centers. The benefits that virtualization has introduced, including dramatic improvements in cost, agility, business efficiency, and business continuity, have won us over. However, new data center challenges have resulted, prompting IT organizations to better align data center architecture with the needs of virtualized workloads.

Virtualization abstracts the hardware components of the data center and overlays them with a common software layer. Because the virtualization layer manages the underlying hardware, actions can be controlled via software. Virtualizing data center services eliminates the islands of CPU, memory, storage, and networking resources that are typically housed in single-purpose devices. A software-defined data center enables service delivery and automated management via software.
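To make the idea concrete, here is a minimal sketch of what "controlled via software" means: a service request expressed as data and fulfilled by a control layer drawing on pooled resources, rather than by hand-configuring a single-purpose device. All of the names and numbers below are illustrative assumptions, not any particular vendor's API.

# Minimal sketch of the software-defined idea: a service request is expressed
# as data and fulfilled by a software control layer drawing on pooled
# resources, rather than by configuring a single-purpose device by hand.
# All names and figures here are illustrative, not any vendor's API.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    vcpus: int
    memory_gb: int
    storage_gb: int

@dataclass
class ServiceRequest:
    name: str
    vcpus: int
    memory_gb: int
    storage_gb: int

def provision(pool: ResourcePool, req: ServiceRequest) -> bool:
    """Carve a workload out of the shared pool entirely in software."""
    if (req.vcpus <= pool.vcpus and
            req.memory_gb <= pool.memory_gb and
            req.storage_gb <= pool.storage_gb):
        pool.vcpus -= req.vcpus
        pool.memory_gb -= req.memory_gb
        pool.storage_gb -= req.storage_gb
        print(f"Provisioned {req.name} from the shared pool")
        return True
    print(f"Insufficient pooled capacity for {req.name}")
    return False

pool = ResourcePool(vcpus=128, memory_gb=1024, storage_gb=20_000)
provision(pool, ServiceRequest("web-tier", vcpus=8, memory_gb=32, storage_gb=200))

In a real software-defined data center, the same request would also carry storage and network policy, and the control layer would place the workload automatically.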

So what's the problem? Many IT organizations have only partially delivered on virtualization, stopping well short of a software-defined data center. Siloed physical assets -- including storage and data management appliances -- are hampering progress. The limited scope of what has been virtualized so far (in most cases, server virtualization still depends on specialized hardware systems that are not horizontally scalable) results in greater complexity and cost, and scale only exacerbates the problem.

Infrastructure innovation
Over time, the proliferation of purpose-built devices has created unnecessary complexity, and this has resulted in data center chaos. Innovation at different stages and by different vendors has produced a layering of technologies that interoperate but are often extremely inefficient.

One example is backup to disk. Companies sink significant capital into backup hardware, including backup servers, disk storage, deduplication appliances, and WAN optimization appliances, often in both the primary data center and the remote disaster recovery site. When no backup is running, the CPU and memory across these many specialized systems and devices sit underutilized.
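A bit of back-of-the-envelope arithmetic shows how low that utilization can be. The window length below is an assumption for illustration, not a figure from any survey.

# Illustrative arithmetic only: assume a dedicated backup stack is busy for a
# nightly 4-hour backup window and effectively idle the rest of the day.
backup_window_hours = 4          # assumed nightly window
hours_per_day = 24

utilization = backup_window_hours / hours_per_day
print(f"Approximate duty cycle of dedicated backup hardware: {utilization:.0%}")
# ~17% -- the CPU and memory in those purpose-built devices sit unused for the
# remaining hours, and the same math applies again at the DR site.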

Another example is capacity efficiency. Over the past decade, IT departments have attacked this problem by deploying all kinds of technologies, such as WAN optimization and backup deduplication appliances. As a result, data efficiency technologies have become standard features of many different products.

When all these products are put together in the data center, IT departments end up processing the same data again and again as it passes through each device. This process is complex and costly, requiring multiple management touchpoints. The sheer volume of resources consumed undermines the goals of virtualization.
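The sketch below shows why that repetition is wasteful, using the common content-based deduplication approach of splitting data into chunks and indexing each chunk by its hash. The chunk size, device names, and payload are assumptions for illustration; the point is that when each appliance keeps its own siloed index, each one re-hashes and re-stores the very same chunks.

# Sketch of content-based deduplication: data is split into chunks, each chunk
# is identified by a hash, and only unseen chunks are stored. When every
# appliance in the chain keeps its own independent index, the same chunks are
# hashed and tracked again at each hop. Chunk size and device names are
# assumptions for illustration.
import hashlib

CHUNK_SIZE = 4096  # assumed fixed-size chunking for simplicity

def dedupe(data: bytes, index: dict) -> int:
    """Store only chunks not already in this device's index; return new bytes stored."""
    stored = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in index:
            index[digest] = chunk
            stored += len(chunk)
    return stored

payload = b"the same database backup" * 10_000

# Each purpose-built device maintains its own index, so each one re-hashes
# and re-stores the identical data stream.
for device in ("backup server", "dedupe appliance", "WAN optimizer"):
    index = {}  # separate, siloed index per device
    print(device, "stored", dedupe(payload, index), "new bytes")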

Underused resources
Before virtualization, it was common for server utilization to average under 10%. Virtualization has pushed the average much higher. Yet IT departments still maintain separate groups of people to manage separate resources, such as servers, storage, networking, and end user computing.

Emerging workloads are creating resource challenges that push IT departments to build infrastructure on a per-service basis. VDI environments, for example, create very different resource usage patterns than server virtualization projects do. To cope, IT professionals often implement completely separate environments, from servers down to storage, to meet user expectations.

Deployment difficulty and delays
Resource challenges are the No. 1 reason organizations continue to have problems deploying new applications and services, followed by administrative overhead. One example is allocating enough storage resources for applications to run reliably: many VMs typically share a single LUN, creating challenging IO loads for storage systems.

The term "IO blender" is used to describe this situation, where multiple workloads with different IO streams being multiplexed by the hypervisor results in random IO streams competing for resources, increasing the IOPS required to service the virtual workloads. To solve this challenge, IT often overprovisions storage, or flash/SSD storage is used in place of spinning disk to improve performance, leading to a higher cost per GB of storage allocated per VM.

Mobility and management
VMs are portable, but their range of portability is often constrained by their association with physical storage. VMs are tied to a datastore in the virtualization domain -- which is tied to storage. Siloed physical storage is often managed at the element level, including LUNs, volumes, RAID groups, and physical disks.

Policies are also configured at the element level, which means they cannot be specified for individual VMs; instead, they are defined for the storage element where many VMs reside. The mobility and management needed in a software-defined data center call for a top-down approach: policies established and managed at the virtual machine and workload level.
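The contrast between the two scopes can be sketched as a simple data model. Everything here is illustrative -- the LUN, VM, and policy names are made up -- but it shows how an element-level policy forces every VM on the LUN to share one setting, while a VM-level policy travels with the workload.

# Element-level: one policy hangs off the LUN/datastore, so every VM placed
# there inherits it whether appropriate or not. Names are illustrative.
element_level = {
    "lun-07": {
        "policy": {"backup": "nightly", "raid": "RAID-6"},
        "vms": ["web-01", "db-01", "test-03"],   # all share the same policy
    }
}

# VM-level (top-down): the policy is a property of the workload itself and
# moves with the VM wherever its data happens to live.
vm_level = {
    "web-01": {"backup": "nightly"},
    "db-01": {"backup": "hourly", "ha": True},
    "test-03": {"backup": "none"},
}

def policy_for(vm: str) -> dict:
    """Resolve a VM's policy in the VM-centric model."""
    return vm_level.get(vm, {})

print(policy_for("db-01"))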

Policy misalignment
Beyond the performance challenges of the post-virtualization world, organizations face policy challenges that span both the physical and virtual worlds. Physical servers have a direct mapping from application to server, to storage array, to LUN, and to storage policy. That direct mapping is what makes storage upgrades so complex. For example, a replication policy is applied to a LUN in storage array X at IP address Y and tells that LUN to replicate to storage array A at IP address B.

In the virtualized world, there are many applications on a host and many hosts using a single LUN, so applying a policy to a single LUN is neither efficient nor granular enough. Instead, applying backup and replication policies directly to individual applications (or VMs) is better aligned with managing a virtual environment. In this model, replication policies specify a destination -- in this case, a data center -- that is abstracted away from the underlying infrastructure. This allows an administrator to upgrade the infrastructure in one data center without policy reconfiguration or data migration, which increases efficiency and decreases risk.
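A small sketch of that destination abstraction, using made-up arrays, addresses, and site names: the LUN-level rule hard-codes devices on both ends, while the VM-level rule names a logical site that a separate mapping resolves to whatever hardware currently backs it.

# LUN-level rule: devices and addresses are hard-coded on both ends, so any
# hardware change means touching the policy itself. All values are made up.
lun_level_rule = {
    "source": {"array": "X", "ip": "10.0.0.11", "lun": "lun-07"},
    "target": {"array": "A", "ip": "10.8.0.21", "lun": "lun-07-r"},
}

# Per-VM rule: the destination is a logical data center, not a device.
vm_rules = {"db-01": {"replicate_to": "dr-site"}}

# Only this mapping changes when the DR site's storage is upgraded or replaced;
# the VM-level policies and the data they protect are untouched.
site_map = {"dr-site": {"array": "A", "ip": "10.8.0.21"}}

def resolve(vm: str) -> dict:
    """Resolve a VM's logical replication destination to current hardware."""
    rule = vm_rules[vm]
    return site_map[rule["replicate_to"]]

print("db-01 replicates to", resolve("db-01"))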

Organizational misalignment
IT organizations typically align their team structures and skill sets with hardware resource silos. The software-defined data center should eliminate the need for certain manual activities to be executed by data center personnel, because the abstraction layer masks much of the complexity related to hardware resources.

IT organizations need to shift focus: rather than staffing deep subject matter experts for each hardware resource silo, they need people with broader knowledge who can manage applications and the virtualization environment.

Despite the many challenges, IT professionals should not shy away from implementing a virtualized environment. However, careful consideration must be given to the virtualization architecture to sustain efficiency and actually reap the benefits of virtualization. The next article in this series will delve into the business requirements affecting IT.

About the Author(s)

Jesse St. Laurent

Vice President of Product Strategy, SimpliVity
