Biggest Storage Trend of 2012

In our last few entries we discussed various storage trends to watch for in 2012. One of the most pressing trends this year might also end up being the biggest: performance management. Software that can match applications to the right storage infrastructure and the right storage medium may be the most important tool to acquire--and performance management the most critical IT skill to develop--this year.

Performance management is being driven on two fronts. First, there's a need for more speed because of server and desktop virtualization, as well as the increasing importance of mission-critical databases. Second, solid-state disks (SSDs) and higher-speed networks now provide the ability to deliver that speed.

What is missing is an understanding of which application or virtual machine qualifies for the highest level of bandwidth and the highest-performing storage. The IT skills needed to diagnose and address these problems have to be developed this year. And while IT personnel are learning those skills, there is a desperate need for tools that can put this information in front of administrators.
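
To make the decision concrete, here is a minimal sketch (in Python) of the kind of classification such a tool would automate: given observed I/O characteristics, decide which workloads qualify for the fastest tier. The workload names, metrics, and thresholds are all hypothetical illustrations, not measurements or recommendations.

```python
# Hypothetical tier-assignment sketch: map workloads to storage tiers
# based on measured I/O demand. All numbers are placeholders.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    avg_iops: float          # sustained I/O operations per second
    p99_latency_ms: float    # 99th-percentile response time observed
    latency_sensitive: bool  # e.g., a transactional database

def assign_tier(w: Workload) -> str:
    """Pick a storage tier from observed demand (thresholds are illustrative)."""
    if w.latency_sensitive and w.p99_latency_ms > 10:
        return "ssd"    # suffering now; promote to the fast tier
    if w.avg_iops > 5000:
        return "ssd"    # heavy I/O load justifies the fast medium
    if w.avg_iops > 500:
        return "sas"    # moderate load lands on mid-tier disk
    return "sata"       # cold or light workloads stay on capacity disk

workloads = [
    Workload("oltp-db", 12000, 18.0, True),
    Workload("file-share", 300, 4.0, False),
    Workload("vdi-pool", 2500, 12.0, True),
]

for w in workloads:
    print(f"{w.name:12s} -> {assign_tier(w)}")
```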

As we discussed in What is a Virtualization Performance Specialist?, these tools need to provide rapid, heads-up analysis and close to, if not actual, real-time monitoring of the environment. They also need to provide insight into specific virtual machines as well as a holistic view of the virtual infrastructure, so that performance-sensitive virtual machines can be properly balanced against less performance-critical ones.
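
The monitoring pattern itself is straightforward to sketch. The Python fragment below polls a per-VM latency metric on a short interval and reports both a per-VM and a host-wide view, flagging performance-sensitive machines that exceed a latency target. The collector function, VM names, and the 10-ms target are hypothetical stand-ins for whatever hypervisor or array API and service levels are actually in use.

```python
import random
import statistics
import time

PERF_SENSITIVE = {"oltp-db", "exchange"}   # hypothetical service tier
LATENCY_TARGET_MS = 10                     # hypothetical SLA target

def get_vm_latency_ms(vm):
    """Hypothetical collector; replace with real hypervisor/array calls."""
    return random.uniform(1, 25)

def poll(vms, samples=5, interval_s=0.2):
    """Gather a short window of latency samples for each VM."""
    history = {vm: [] for vm in vms}
    for _ in range(samples):
        for vm in vms:
            history[vm].append(get_vm_latency_ms(vm))
        time.sleep(interval_s)
    return history

def report(history):
    # Holistic view: one number for the whole infrastructure.
    all_samples = [s for samples in history.values() for s in samples]
    print(f"host-wide mean latency: {statistics.mean(all_samples):.1f} ms")
    # Per-VM view: spot the machines that need rebalancing.
    for vm, samples in history.items():
        mean = statistics.mean(samples)
        print(f"  {vm:12s} mean {mean:5.1f} ms")
        if vm in PERF_SENSITIVE and mean > LATENCY_TARGET_MS:
            print(f"  WARNING: {vm} over target; consider rebalancing")

report(poll(["oltp-db", "exchange", "backup", "file-share"]))
```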

These tools also must look outside the virtual environment and into the application environment. Many business-critical and ultra-performance-sensitive applications have yet to be virtualized, and in some cases never will be. Today the performance specialist might need to juggle multiple tools to get performance analysis of the entire environment. In the future, tools that manage application performance, virtualization performance, and storage infrastructure performance should merge into a single application or suite with a single interface.

The alternative to developing a performance-management practice is to choose storage systems and infrastructures that can meet any performance demand. In other words, just make the whole environment fast. Although this might not be the most efficient way to optimize and manage performance, it does fit the traditional IT model of throwing hardware at the problem.

I am not against throwing hardware at the problem--if we can prove that it is more cost effective than aggressively managing the problem. SSD-only storage systems and high-speed 10Gb Ethernet networks are coming within the price range of many data centers. If the tools can't be developed, then it might be easier--and even less expensive--to buy storage that's fast enough for the entire environment and not spend the time and effort to fine-tune performance.
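
That proof is, at heart, simple arithmetic once real quotes are in hand. The back-of-envelope Python comparison below frames the decision; every price, capacity, and labor figure in it is a hypothetical placeholder, not market data.

```python
# Hypothetical cost comparison: all-SSD versus a managed, tiered setup.
# Every number here is a placeholder to be replaced with real quotes.

capacity_tb = 50

# Option A: buy all-SSD storage fast enough for everything.
ssd_cost_per_tb = 4000          # hypothetical street price
option_a = capacity_tb * ssd_cost_per_tb

# Option B: mostly HDD, a small SSD tier, plus tools and staff time.
hdd_cost_per_tb = 800
ssd_tier_tb = 5
tooling_cost = 30000            # performance-management software
admin_hours, hourly_rate = 300, 100
option_b = ((capacity_tb - ssd_tier_tb) * hdd_cost_per_tb
            + ssd_tier_tb * ssd_cost_per_tb
            + tooling_cost
            + admin_hours * hourly_rate)

print(f"All-SSD:          ${option_a:,}")
print(f"Tiered + managed: ${option_b:,}")
print("Cheaper option:", "all-SSD" if option_a < option_b else "tiered + managed")
```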

Due to flat IT budgets, many IT departments will be forced to get the most out of what they have, with perhaps a small performance fix for certain situations. This is where applying the right tools, along with the right IT knowledge, is critical to getting the maximum performance out of the environment. Virtualization in many ways has taken away the "headroom" that IT used to count on to absorb sudden surges. In 2012 the job will fall to the performance-management specialist to make sure that sudden performance peaks don't shut down critical applications.
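
One way to think about that job: continuously measure how much headroom remains and raise an alarm before a burst can consume it. A tiny Python sketch of the idea follows; the array's IOPS capability, the 20% floor, and the sample values are hypothetical.

```python
# Hypothetical headroom watcher: track aggregate storage load against
# total capability and alert before a surge can saturate it.

from collections import deque

TOTAL_IOPS_CAPABILITY = 50000    # hypothetical array limit
HEADROOM_FLOOR = 0.20            # alert when <20% capability remains

recent = deque(maxlen=10)        # rolling window of aggregate IOPS

def record_sample(aggregate_iops):
    recent.append(aggregate_iops)
    peak = max(recent)
    headroom = 1.0 - peak / TOTAL_IOPS_CAPABILITY
    if headroom < HEADROOM_FLOOR:
        print(f"ALERT: only {headroom:.0%} headroom left "
              f"(recent peak {peak:.0f} IOPS)")

# Simulated samples trending toward saturation.
for sample in (20000, 32000, 38000, 41000, 44000):
    record_sample(sample)
```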

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.