In my recent column Are Storage Controllers Overworked, I discussed the challenges the storage controller faces today. The controller sits between the servers that need to read and write data and the storage that will eventually deliver and store that data. The controller is now under assault thanks to densely packed virtual machines and low-latency, high-performance disk systems based on solid-state storage.
There are three ways to fix a "broken" controller: You can make it bigger and faster; offload some of the controller's responsibilities; or design a more scalable controller architecture like those commonly found in scale-out or cloud-storage designs. Each of these three approaches has merit; which one is right for you depends on your needs. In the next few columns I'll dive more deeply into each one.
Let's start with building a faster storage controller. Dual-controller architectures are still the dominant type of storage architecture in data centers today. It is all too easy to dismiss them, but that would be a mistake for many data centers. I was asked recently why dual-controller storage architectures still exist--why haven't they all been replaced by scale-out architectures? The answer is that dual-controller architectures can provide plenty of performance and scalability to a large number of data centers. Despite the theoretical limitations, in practice many organizations won't outgrow their storage controller investment for years after initial purchase, if ever.
As I wrote in The Advantage of Scale Up Storage, scale-up architectures do have some advantages over some of the newer architectures on the market. First, they are simple to implement, typically with a single head unit and fewer network connections. Fewer plugs equals simpler installation. Second, the capabilities and affordability of the modern Intel processor allow scale-up architectures to handle a wide range of performance and capacity demands. Finally, they are typically a less expensive design from an upfront cost perspective. There simply is less "sheet metal" involved in a scale-up storage architecture.
The challenge with scale-up storage is its range of scalability, not necessarily its upper limit. Scale-up designs also struggle most when both scaling vectors--capacity and I/O performance--need to grow simultaneously.
For example, if you know that some day you will need a scale-up storage system with high capacity, high performance, or both, you need to start with a system that can deliver much if not most of that scale up front, even if you won't need it for a few years. That means wasted IT budget, because capacity and performance continually become less costly over time; in IT, the longer you wait, the cheaper things become. With these systems, it is hard to start small and end big.
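To put rough numbers on that point, here is a minimal sketch of the buy-up-front penalty. All figures are illustrative assumptions, not vendor pricing: 100 TB needed per year for three years, a hypothetical starting cost of $100 per TB, and an assumed 20% annual decline in cost per TB.

```python
def upfront_cost(total_tb, price_per_tb):
    """Buy all of the eventual capacity on day one, at today's price."""
    return total_tb * price_per_tb

def incremental_cost(yearly_tb, price_per_tb, annual_decline=0.20):
    """Buy only each year's increment, at that year's (lower) assumed price."""
    cost = 0.0
    for year, tb in enumerate(yearly_tb):
        cost += tb * price_per_tb * (1 - annual_decline) ** year
    return cost

price = 100.0  # assumed starting cost per TB, in dollars
print(upfront_cost(300, price))                  # buy 300 TB now: 30000.0
print(incremental_cost([100, 100, 100], price))  # buy 100 TB/year: 24400.0
```

Under these assumptions, deferring two-thirds of the purchase saves nearly 20%--and that is before counting the capacity that sits idle in the interim. The exact decline rate varies by media and market, but the direction of the math is what makes pre-buying scale painful.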
As with any IT project that hits a wall, there are more than a few workarounds. Several vendors can live-migrate volumes between multiple storage systems without the aid of a hypervisor. Although you still have multiple points of management, workload balancing efforts are greatly simplified.
Alternatively, software can be added to bring similar scaling functionality to scale-up storage that scale-out systems enjoy natively. As I discussed in The Value Of Open Source Storage Software In The Enterprise, solutions are available to leverage commodity servers and storage to deliver scale-out results.
The second fix for the broken controller--offloading controller responsibilities--might be used instead of, or in conjunction with, building a faster controller. We'll look at offloading next time.