Key Challenges In The Modern Storage World
Data Growth: Data growth is probably the only absolute constant in IT, and arguably, it is the inevitable cause of most operational problems. Data growth never abates; it occurs whether the economy is up or down. One can have a delightful academic debate about whether the demand for capacity will always exceed the supply, and therefore drive the development of new technologies (in terms of both capacity and performance), or whether a declining raw price for storage itself drives latent demands that become economically viable at the new cost level. Frankly, it does not matter. When you're at the top of a roller coaster, regardless of whether you were pulled or pushed up the incline, massive acceleration and g-forces are coming! The whole storage ecosystem is strapped in on such a ride.
The New IT Architectures: Virtualization drives and stresses both storage capacity and performance needs in equal measure--and a poorly designed storage infrastructure can be an anchor on the success of server virtualization and VDI initiatives. At the same time, the use of cloud models is increasing. (ESG's own research pins planned cloud expenditures in 2012 at 7% of all IT expenditure). While the attraction of both virtualization and cloud is beguiling, users all too often find that storage--and its overall management--can exhibit a type of "Ponzi" complexity behind the scenes. Very often, specific storage exists for a specific application, and huge operational issues can arise in making data available to others within or outside an organization. Imagine, for instance, if you had to change laptops to go from Word to PowerPoint; and yet, essentially, that is how storage is often deployed.
Uncertainty: With much of the world's new data produced outside of data centers--unstructured, mobile-produced, rich media being among the "new norms"--the ability to predict what storage is needed within those data centers is a crucial but increasingly difficult task. On the other hand, what is certain is that end-users have high expectations in terms of speed, availability, and immediacy ... irrespective of application, geography, or time zone.
Operational Struggles: A lack of flexibility in, and between, many storage systems simply makes completing the data management jigsaw puzzle even harder. Everything must still be protected and backed up ... whether it's a scale-up or scale-out, distributed or converged environment. Many users have resorted (perhaps a better choice of words would be "are resigned") to over-provisioning and underutilizing their storage, simply to get the job done, even knowing that such an approach equates to throwing money down the drain!
The Crux Of The Problem
Disk technology is 56 years old this year, and tape is 60. The underlying essence of what we're using for storage hasn't changed. There's an old joke about giving someone directions that ends with, "But if I were you, I wouldn't start from here!" But is that where we are with storage? Are we really just rearranging the deckchairs on the Titanic, or is there room for optimism? The truth is that storage management is indeed complex and tough. And--sometimes unfortunately, sometimes knowingly--getting the job done effectively has for years often trumped getting it done efficiently. So, how and why did we get to where we are?
How We Got Here
Commercial computing took hold when one infrastructure stack executed one specific application for one specific purpose. The original mainframe was essentially a glorified calculator. Centralized computing was predictable and controllable, albeit expensive. And it was manageable: one processor system and one I/O subsystem.
Decentralized (or distributed) computing was developed largely to try to solve the economic challenges (essentially those of Capex at the time) of centralized computing. It yielded low-cost, commodity servers--which we promptly plugged into proprietary, large, expensive, monolithic storage boxes! Servers became cheaper and more interoperable, while storage remained proprietary and expensive. In those old days, the server was the thing that cost all the money. You picked your server by your OS. You picked your OS by your application. Storage was a "peripheral." Today, it is servers that are cheap and interoperable, while storage is still usually expensive, complex, incompatible, and just plain difficult. In many respects, it is the last bastion of IT awkwardness: the peripheral tail wagging the purposeful IT dog!
Additionally, two other things have changed lately: The economic downturn has put a laser focus on IT efficiency in general and storage in particular, and the lockstep between new storage technologies and new demands for capacity and performance has been well and truly broken. Without change and innovation, storage costs will escalate to an unacceptable percentage of IT budgets, and most likely do so without providing acceptable service levels either. And so we return to the mantra mentioned at the start that "Something has to change." In layman's terms, what is needed is storage that has more automation, flexibility, application and business linkages, resource utilization, and management/tuning ease. Never forgetting, of course, that there is also one thing we consistently need storage to have less of: cost!
The good news, however, is that there is a surprising number of reasons to be optimistic ... and that will be the [more cheerful!] content for next month's article.
Mark Peters is a Senior Analyst at the Enterprise Strategy Group, a leading independent authority on enterprise storage, analytics, and a range of other business technology interests.