The ability to seamlessly scale storage capacity and performance has been a constant need for the data center and a constant claim from the storage vendors. As we covered in a recent column, the choices are typically to select a storage system that has enough upfront capabilities to meet your needs far into the future (scale up storage) or to select a system that can have performance added to it as capacity is added (scale out storage).
Scale out storage seems to have that ideal pay-as-you-grow capability for addressing the never-ending need for performance and capacity, yet scale up storage continues to be selected regularly. This is primarily because scale out storage has a few challenges it needs to overcome.
First, the minimum purchase for most scale out storage infrastructures is at least three nodes. For the small to midsize business (SMB), this may be more storage capacity and performance than they need for the next five years. A smaller, less scalable storage system may be the less expensive option, even though the three-node scale out minimum would be overkill from a performance perspective.
The second challenge, again especially for the SMB, is that there may be almost too much performance in a scale out design. As mentioned above, each scaling step in a scale out system is done by adding a node. These nodes are essentially servers with internal hard drive capacity. As the number of nodes stacks up to meet the capacity demand, the combined processing power may go largely unused. This excess processing power may get worse before it gets better: Intel is set to increase the number of cores per chip again in the near future, which means significantly more processing capability for roughly the same amount of money.
For the large enterprise, both of these problems are non-issues, at least for now. But for the SMB market they are a real challenge and one of the reasons these organizations turn to multiple scale up storage systems to meet their needs. There are several solutions to this problem.
The first may be to run the storage intelligence or the storage clustering software as a virtual machine within a virtual infrastructure. In this implementation, the storage node becomes a virtual machine on each physical host in the environment, and the storage attached to that host then becomes part of a cluster-wide shared storage pool. As we discuss in our article "Resolving The Problems With Server Side Flash", this is a great way to leverage server side storage technologies like PCIe SSD or inexpensive high-speed SAS. The storage node as a virtual appliance brings a natural scale to the environment: as more physical hosts are added, more storage processing power is available.
Another option is to run each node on a dedicated CPU core in the physical server. As we discussed in our briefing note on scalable object storage, with this concept you can use two physical servers (for redundancy), each with quad-core processors, to build an "eight-node" system. This approach keeps the physical node count, and the physical space those nodes require, down while maximizing the use of the available CPU resources. It still provides predictable performance since each node has a core dedicated to it.
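As a rough illustration of the core-per-node idea, the sketch below pins one worker process per CPU core on a Linux host using Python's `os.sched_setaffinity`. The `storage_node` function is a hypothetical stand-in for a real storage-node process, and the up-to-four-core layout is only an assumption for the example, not any vendor's actual implementation:

```python
import os
from multiprocessing import Process, Queue

def storage_node(core_id, results):
    """Hypothetical stand-in for one storage-node process."""
    os.sched_setaffinity(0, {core_id})  # dedicate this process to a single core
    # Report back which core(s) the scheduler will now use for this process
    results.put((core_id, sorted(os.sched_getaffinity(0))))

if __name__ == "__main__":
    # One "node" per core, up to four per physical server in this sketch
    cores = sorted(os.sched_getaffinity(0))[:4]
    results = Queue()
    nodes = [Process(target=storage_node, args=(c, results)) for c in cores]
    for n in nodes:
        n.start()
    for _ in nodes:  # collect results before joining to avoid queue blocking
        core, affinity = results.get()
        print(f"node pinned to core {core}: effective affinity {affinity}")
    for n in nodes:
        n.join()
```

Each worker restricts itself to exactly one core, so node-to-node interference is limited and performance stays predictable, which is the property the core-per-node design relies on.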
Processors are getting faster, and scale out storage vendors are going to have to either move to more of a software model to efficiently use processor resources or give their storage nodes more work per node so that they don't end up with too many wasted CPU cycles. The value of the two approaches described above is that they not only make the processors earn their keep, they should also save the data center a lot of money and a lot of floor space.