With 2012 quickly approaching, many storage managers are making plans to refresh their storage infrastructure, or at least replace aging components. With the many new storage introductions and innovations we've seen in 2011, it can be hard to decide what features and capabilities your next storage system should have.
One of the biggest challenges is that even basic features you might once have expected from the storage system itself are no longer required there, thanks to storage virtualization, the storage hypervisor, and the increasing data management intelligence of the operating system. Capabilities like snapshots, replication, and thin provisioning can all be provided outside of the storage array. In fact, you may find that you're better off with a less feature-burdened system, which can lower cost, potentially raise performance, and better complement the data services provided by software.
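To make one of those capabilities concrete, here is a minimal sketch of the idea behind thin provisioning, whether it lives in the array or in software: physical capacity is allocated only when a virtual block is first written, so a large advertised volume consumes space in proportion to the data actually stored. The class and block sizes are hypothetical illustrations, not any vendor's implementation.

```python
class ThinVolume:
    """Hypothetical thin-provisioned volume: allocate-on-first-write."""

    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks   # advertised (virtual) size
        self.mapping = {}                      # virtual block -> physical block
        self.next_physical = 0                 # next unallocated physical block

    def write(self, vblock):
        if not 0 <= vblock < self.virtual_blocks:
            raise IndexError("write past end of volume")
        if vblock not in self.mapping:         # allocate only on first write
            self.mapping[vblock] = self.next_physical
            self.next_physical += 1
        return self.mapping[vblock]

    def physical_used(self):
        return len(self.mapping)               # blocks actually consumed


# A volume that advertises one million blocks but has written to only three
vol = ThinVolume(virtual_blocks=1_000_000)
for b in (0, 1, 0, 42):                        # rewriting block 0 allocates nothing new
    vol.write(b)
print(vol.physical_used())                     # prints 3
```

The point of the sketch is the gap between `virtual_blocks` and `physical_used()`: the consumer sees full capacity up front while the provider defers allocation.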
A good first step might be to understand the environment's requirements and the features and capabilities you need to accomplish the job. Look for those capabilities holistically, instead of depending on a single system to provide them all.
Whether you decide to use a single storage system from a single vendor or a mixed approach using some form of virtualization, another key decision will be whether and how much solid-state storage capacity you will want to use and where you'll want to use it. Not all environments or applications need or can take advantage of solid-state storage, but the reality is that the decreasing price of solid-state storage, combined with the increasing popularity of server virtualization, means that many environments can create the random, multi-threaded I/O workloads for which solid-state is ideal.
As a result, I would make sure that any storage system I select can fully support solid-state storage. It should, however, do more than just "support" solid-state storage; it should take full advantage of it. Instead of providing an incremental upgrade to storage performance, the system, when combined with solid-state, should deliver a revolutionary experience that enables new applications or new levels of virtual machine density.
Because of solid-state storage, and the performance gap it opens between the hard disk tier and the solid-state tier, the storage environment (either through the system itself or through add-on software) should provide some level of automated performance management. As we discussed in "virtualizing storage performance," storage managers no longer have the time or resources to constantly balance workloads across the variety of storage tiers available to them. Relying on manual performance management will likely result in data sitting on one storage tier for its entire lifespan.
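The core of automated performance management is a simple policy loop: track how often each extent of data is accessed, then periodically promote the hottest extents to the small solid-state tier and let the rest live on the capacity disk tier. The sketch below is a hypothetical illustration of that policy, with made-up extent names and access counts; real systems track access heat far more elaborately.

```python
from collections import Counter

def rebalance(access_counts, ssd_capacity):
    """Return the set of extents that should occupy the SSD tier:
    the ssd_capacity most frequently accessed extents this period."""
    hottest = Counter(access_counts).most_common(ssd_capacity)
    return {extent for extent, _ in hottest}

# Simulated per-period access counts for five data extents
counts = {"e1": 500, "e2": 12, "e3": 950, "e4": 3, "e5": 400}

# With room for only two extents on solid-state, the two hottest win
print(rebalance(counts, ssd_capacity=2))       # prints {'e3', 'e1'} (in some order)
```

Run on a schedule, a loop like this replaces the manual tier balancing the paragraph above describes; the counters reset (or decay) each period so that data can be demoted again once it cools off.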
The use of solid-state as a storage tier will also change the type of mechanical storage you pair with it. Given the cost reductions in solid-state storage and the potential for price increases in mechanical storage (driven by the hard-disk drive shortage), we think that most storage systems will no longer have a high-speed mechanical disk tier. Instead, they will combine a slightly larger solid-state tier with a high-capacity, cost-effective, lower-performing mechanical tier.
If, where, and how you use solid-state storage as part of your 2012 storage refresh is of course only part of the decision. There are many more options for storage managers to review prior to making their next storage purchase. One of those is whether or not you should use a scale-out storage system, and that will be the subject of our next entry in this series.