
Are Storage Controllers Overworked?

In a recent webinar we discussed how one of the storage challenges we increasingly see in the virtualized server infrastructure is that storage controllers are becoming overworked. The modern storage controller is asked to perform a long list of functions beyond just managing LUNs and RAID protection. The additional load of supporting hundreds of virtual servers may be the straw that breaks the camel's back.

In the typical storage system, the storage controller is essentially a computer that is built into the system. For years, the processing power on these controllers went largely underutilized by the typical data center, often hovering at less than 5% average utilization. Now, though, three events are occurring that are beginning to push storage controllers to their limits.

First, server and desktop virtualization are making every host that connects to the storage system a fire-breathing I/O demon. In the past, only a small fraction of servers placed significant I/O demand on the storage system. Now the less-demanding servers have been consolidated onto the same hosts as demanding workloads. The result is that every host makes the storage system and its controllers work harder.
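To put rough numbers on that effect, here is a back-of-the-envelope sketch. The server count, consolidation ratio, and IOPS figures are assumptions chosen purely for illustration, not measurements from any particular environment.

```python
# Rough, hypothetical illustration of how consolidation concentrates I/O.
# All figures below are assumptions for the sake of the example.

PHYSICAL_SERVERS = 100          # standalone servers before virtualization
AVG_IOPS_PER_SERVER = 150       # assumed light average load per server
VMS_PER_HOST = 25               # assumed consolidation ratio

# Before: I/O is spread thinly across many independent hosts.
iops_per_host_before = AVG_IOPS_PER_SERVER

# After: the same workloads are stacked onto far fewer physical hosts,
# so each host funnels the combined demand at the storage system.
hosts_after = PHYSICAL_SERVERS // VMS_PER_HOST
iops_per_host_after = AVG_IOPS_PER_SERVER * VMS_PER_HOST

print(f"Before: {PHYSICAL_SERVERS} hosts at ~{iops_per_host_before} IOPS each")
print(f"After:  {hosts_after} hosts at ~{iops_per_host_after} IOPS each")
```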

[ For more on managing performance issues, see Managing Virtual Machine Performance Problems. ]

The second event is the advancement in storage services. Again, storage systems no longer simply provision out storage and manage RAID protection. Now they perform snapshots, thin provisioning, and automated data tiering, to name just a few tasks. Each of these takes a toll on the storage controller. Snapshots and thin provisioning require that the storage controller dynamically allocate capacity in real time as new data is being written. Automated data tiering, potentially the most processor-intensive of these tasks, requires that data be continuously analyzed so that data that has gone cold, or has become highly active, can be moved to a more appropriate tier of storage. The analysis takes processing power, and the movement of data takes processing power (and bandwidth).
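To show why tiering in particular weighs on the controller, here is a minimal sketch of the kind of scan-and-move pass a tiering engine runs. The extent structure, thresholds, and tier names are hypothetical rather than any vendor's implementation, but the cost pattern is the same: every extent must be examined, and every promotion or demotion is an extra copy the controller schedules on top of normal host I/O.

```python
# Hypothetical sketch of one automated-tiering scan pass.
from dataclasses import dataclass

@dataclass
class Extent:
    extent_id: int
    tier: str             # "ssd" or "hdd"
    recent_accesses: int  # accesses observed in the last scan interval

HOT_THRESHOLD = 1000   # promote to flash above this (assumed value)
COLD_THRESHOLD = 10    # demote to disk below this (assumed value)

def rebalance(extents: list[Extent]) -> list[tuple[int, str]]:
    """Return (extent_id, target_tier) moves for this scan pass."""
    moves = []
    for ext in extents:  # analysis cost: the controller touches every extent
        if ext.tier == "hdd" and ext.recent_accesses >= HOT_THRESHOLD:
            moves.append((ext.extent_id, "ssd"))
        elif ext.tier == "ssd" and ext.recent_accesses <= COLD_THRESHOLD:
            moves.append((ext.extent_id, "hdd"))
    return moves  # each move is a data copy the controller must also perform

extents = [Extent(1, "hdd", 5000), Extent(2, "ssd", 2), Extent(3, "hdd", 50)]
print(rebalance(extents))  # [(1, 'ssd'), (2, 'hdd')]
```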

The one area that the storage system could count on to give it time to catch up was the latency inherent to mechanical hard drives. The time the processor spent waiting for the platters to rotate and the heads to seek into place gave it room to perform and manage various functions. But a third event is now removing that last hiding spot: flash-based storage. Flash has almost no latency, and certainly no rotational latency; it gives and receives data as quickly as it can be sent.
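A quick bit of arithmetic shows how much breathing room rotational latency used to provide. The drive speeds and the flash latency figure below are typical published orders of magnitude used here as assumptions, not benchmarks of any specific product.

```python
# Back-of-the-envelope latency comparison (assumed, typical values).

def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency = time for half a platter revolution."""
    return (60_000 / rpm) / 2   # 60,000 ms per minute

for rpm in (7_200, 15_000):
    print(f"{rpm:>6} RPM drive: ~{avg_rotational_latency_ms(rpm):.2f} ms rotational latency")

FLASH_READ_LATENCY_MS = 0.1     # assumed order of magnitude for flash media
print(f"  Flash storage: ~{FLASH_READ_LATENCY_MS} ms, with no rotational component")
```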

As the environment, especially the virtual environment, scales, the combination of these three events is making storage controllers a key bottleneck in overall system performance. The solution is to build faster storage controllers, to offload some of the functions to the attached hosts, or to design a more scalable storage controller architecture into the storage system. We will look at each of these options in our next entry.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Storage Switzerland's disclosure statement.
