01/17/2012 10:54 AM
Biggest Storage Trend of 2012

Performance management skills--in both software tools and IT professionals--promise to be the biggest storage story this year.
In our last few entries we discussed various storage trends to watch for in 2012. One of the most pressing trends this year just might end up being the biggest: performance management. The ability to manage and direct applications to the storage infrastructure and to the right storage medium might be the most important software tool to acquire--and the most critical IT skill to develop--this year.

Performance management is being driven on two fronts. First, there's a need for more speed because of server and desktop virtualization, as well as the increasing importance of mission-critical databases. Second, thanks to solid-state disk (SSD) and higher-speed networks, there is the ability to deliver that speed.

What is missing is an understanding of which applications or virtual machines qualify for the highest level of bandwidth and the highest-performing storage. The IT skills needed to diagnose and address these problems must be developed this year. And while IT personnel are learning those skills, there is a desperate need for tools that can provide this information to administrators.

As we discussed in What is a Virtualization Performance Specialist?, these tools need to provide rapid heads-up analysis and close to, if not actual, real-time monitoring of the environment. They also need to provide insight into specific virtual machines as well as a holistic view of the virtual infrastructure, so that performance-sensitive virtual machines can be properly balanced against less performance-critical ones.

These tools also must look outside the virtual environment and into the application environment. Many business-critical and ultra-performance-sensitive applications have yet to be virtualized, and in some cases might never be. Today the performance specialist might need to manage multiple tools to get performance analysis of the entire environment. In the future, tools that manage application performance, virtualization performance, and storage infrastructure performance should all merge into a single application or suite with a single interface.

The alternative to developing a performance-management practice is to choose storage systems and infrastructures that can meet any performance demand. In other words, just make the whole environment fast. Although this might not be the most efficient way to optimize and manage performance, it does fit the traditional IT model of throwing hardware at the problem.

I am not against throwing hardware at the problem--if we can prove that it is more cost effective than aggressively managing the problem. SSD-only storage systems and high-speed 10Gb Ethernet networks are coming within the price range of many data centers. If the tools can't be developed, then it might be easier--and even less expensive--to buy storage that's fast enough for the entire environment rather than spend the time and effort to fine-tune performance.

Due to flat IT budgets, many IT departments will be forced to get the most out of what they have, with perhaps a small performance fix for certain situations. This is where applying the right tools along with the right IT knowledge is critical to getting maximum performance out of the environment. Virtualization has in many ways taken away the "headroom" that IT used to count on to handle sudden surges. In 2012 the job will fall to the performance management specialist to make sure that sudden performance peaks don't shut down critical applications.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.


re: Biggest Storage Trend of 2012

Great post as always, George, and I appreciate your reference to the company I work for, namely Virtual Instruments. We've definitely seen "virtual sprawl" this year, as non-SLA-linked apps are happily being virtualised without much contention or fear now that the "businesses" have understood the benefits of virtualization.

It's the potential "virtual stall"--going that extra yard and taking the risk of virtualising your key critical applications--that will be interesting to watch. Hence I strongly concur with you that performance management holds the key to ensuring that risk and performance degradation are not introduced into any proposal to virtualise mission-critical applications.

With that said, it can only be done successfully with real-time, millisecond-level monitoring. While many vendors claim to be "real-time," the average end user is unaware of being sold a marketing gimmick that hides the fact that the metrics are based on averages.
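The problem with averages is easy to demonstrate. A minimal sketch (with made-up latency numbers, not taken from any vendor's tool) of how an averaged metric can report a healthy figure while a meaningful fraction of requests is suffering:

```python
import math

# Hypothetical I/O latency samples in milliseconds (illustrative numbers):
# 98% of requests are fast, 2% spike badly.
samples = [2.0] * 980 + [250.0] * 20

# The averaged metric a typical dashboard might report.
mean = sum(samples) / len(samples)

# Nearest-rank 99th percentile: the value below which 99% of samples fall.
ranked = sorted(samples)
p99 = ranked[math.ceil(0.99 * len(ranked)) - 1]

print(f"mean latency: {mean:.2f} ms")  # 6.96 ms -- looks perfectly healthy
print(f"p99 latency: {p99:.0f} ms")    # 250 ms -- the spikes the average hides
```

An application whose users see a quarter-second stall on one request in fifty is in trouble, yet the average alone would never show it; this is why percentile or per-sample monitoring matters.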

The utopia would be, as you've stated, a single performance management dashboard that lets you see all the way from the app to the VM, across the SAN, to the back end of the storage. If 2012 is the year where performance management takes precedence, I don't think we will be that far away (-:

All the best
Archie Hendryx