
2014 State Of Storage: Cost Worries Grow

Download our complete February Tech Digest issue on enterprise storage, distributed in an all-digital format (registration required).

Business users worry about storage growth like the NSA worries about your privacy. Sure, users might pay lip service to the virtue of restraint, but when it comes down to it, they want their stuff. And their stuff? It's digital content, and it's feeding double-digit annual growth in the amount of data under management, according to our 2014 InformationWeek State of Enterprise Storage Survey.

At 27% of companies, IT is wrangling data growth of 25% or more per year. The main culprit: databases or data warehouses. Money's still tight, with 25% saying they lack the cash even to meet demand, much less optimize performance by loading up on solid state. IT leaders face a difficult "pick two" conflict among performance, capacity, and cost.

Data growth is an inescapable trend. In its "The Digital Universe in 2020" report, IDC estimates that the overall volume of digital bits created, replicated, and consumed across the United States will hit 6.6 zettabytes by 2020. That represents a doubling of volume about every three years. For those not up on their Greek numerical prefixes, a zettabyte is 1,000 exabytes, or 250 million 4-TB drives. Or look at just one company as an example: UPMC, a major healthcare provider and insurer, has about 5 petabytes of data today, and that volume has been doubling every 18 months.
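To make those figures concrete, here's a quick back-of-the-envelope calculation. It's a sketch: the decimal units, the compound rate implied by a three-year doubling, and the six-year UPMC projection are our own arithmetic, not IDC's.

```python
# Back-of-the-envelope storage math, using decimal units (1 ZB = 10**21 bytes).
ZETTABYTE = 10**21        # bytes
DRIVE_4TB = 4 * 10**12    # bytes in a 4-TB drive

drives_per_zb = ZETTABYTE / DRIVE_4TB
print(f"4-TB drives per zettabyte: {drives_per_zb:,.0f}")   # 250,000,000

# A doubling every three years compounds at roughly 26% per year.
annual_growth = 2 ** (1 / 3) - 1
print(f"Implied annual growth rate: {annual_growth:.1%}")   # 26.0%

# UPMC's trajectory: 5 PB today, doubling every 18 months.
years = 6
projected_pb = 5 * 2 ** (years / 1.5)
print(f"UPMC data in {years} years at that rate: {projected_pb:,.0f} PB")  # 80 PB
```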

For enterprise IT, we see three conclusions. First, don't count on all-solid-state storage saving your sanity. Don't get us wrong; it's the most disruptive digital storage technology ever. But talk of hard disks joining floppies on the ash heap of IT history is premature, for reasons we'll discuss. For now, most storage vendors offer hybrid architectures that can dynamically vary the flash-to-disk ratio for changing workloads.

[How will a better-connected world save you money? Read Internet of Things: 8 Cost-Cutting Ideas For Government.]

Second, step up your use of scale-out arrays, distributed file systems, and storage virtualization, a.k.a. software-defined storage (SDS). A software-defined storage strategy is your best bet to automatically place, migrate, and manage data and applications on hybrid arrays to meet demand without breaking the budget. Today's applications, particularly mobile apps, are sensitive to any variance in storage performance, which means architectures must be optimized for performance as well as capacity. We think 2014 is the year of SDS, and in the nick of time.
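What does "automatically place" mean in practice? The sketch below shows the kind of policy an SDS layer applies: pick the cheapest tier that still meets an application's latency needs, and keep hot data on flash. The tier names, prices, and thresholds are hypothetical, not any vendor's product.

```python
from dataclasses import dataclass

# Hypothetical tiers and prices (illustrative only, not any vendor's catalog).
TIERS = {
    "flash": {"max_latency_ms": 1,   "cost_per_gb": 0.90},
    "disk":  {"max_latency_ms": 10,  "cost_per_gb": 0.05},
    "cloud": {"max_latency_ms": 100, "cost_per_gb": 0.02},
}

@dataclass
class Volume:
    name: str
    iops_per_gb: float     # access density: hot data scores high
    latency_sla_ms: float  # the slowest response the application tolerates

def place(vol: Volume) -> str:
    """Choose the cheapest tier that meets the volume's latency SLA,
    sending very hot volumes straight to flash."""
    if vol.iops_per_gb >= 1.0:
        return "flash"
    eligible = [(tier, spec["cost_per_gb"])
                for tier, spec in TIERS.items()
                if spec["max_latency_ms"] <= vol.latency_sla_ms]
    return min(eligible, key=lambda pair: pair[1])[0]

print(place(Volume("oltp-db", iops_per_gb=5.0, latency_sla_ms=2)))    # flash
print(place(Volume("archive", iops_per_gb=0.01, latency_sla_ms=500))) # cloud
```

A real SDS layer would also migrate volumes as their access patterns change; the point is that placement becomes a policy decision rather than a manual one.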

Third, the cloud has matured into a legitimate tier in the enterprise storage hierarchy. Now, IT must prevent cloud use, particularly SaaS, from creating new data silos.

As to what's driving demand, greater use of those cloud services and social networks, along with the proliferation of smartphones as information clients, plays a part. Migration of all media, particularly TV, from analog to digital formats is a culprit, too. But for companies, what's really coming at us like a freight train is machine-generated data, notably security images and "information about information." This last bucket includes everything from the Internet of Things, in which devices generate information about their operations and environments, to analytics software that must crunch vast troves of raw data to produce the insights businesses crave.

The solid-state revolution

These days, compromise is a dirty word, and not just in Washington. App developers and end users want it all: blazing performance and unlimited, dirt-cheap capacity. Solid-state storage has done more than any other technology since the hard disk to meet these demands, particularly where random I/O is critical, and its importance can't be overstated. In fact, as flash densities increase and costs plummet, some industry experts argue we're on the verge of all-flash datacenters.

We say be careful what you wish for, because flash capacity still requires trade-offs in reliability and data protection. Solid state can give you hundreds of terabytes of capacity or hard-disk-like longevity, but not both at the same time.

High-density flash designs achieve capacity at the cost of media endurance, notes Radhika Krishnan, VP of marketing at Nimble Storage. A memory cell can be written only a limited number of times before it fails, so the overall system has to tolerate random bit errors and dying memory chips. To get technical, blame an inherent wear-out mechanism in flash: each write tunnels electrons through an insulating layer at voltages that are high, at least by semiconductor standards, and the layer degrades with every pass.
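Those limits translate into an endurance budget you can estimate in a few lines. Every figure below (cycle count, write amplification, workload) is an illustrative assumption, not a spec from Nimble or any other vendor:

```python
# Rough SSD lifetime estimate. Every figure here is an illustrative assumption.
capacity_tb = 1.0           # usable drive capacity
pe_cycles = 3_000           # program/erase cycles per cell, an MLC-class figure
write_amp = 2.0             # controller write amplification
host_writes_tb_day = 0.5    # host data written per day

# Total host writes the media can absorb before wearing out (TBW).
tbw = capacity_tb * pe_cycles / write_amp
lifetime_years = tbw / host_writes_tb_day / 365

print(f"Endurance budget: {tbw:,.0f} TB written")                # 1,500 TB
print(f"Lifetime at this workload: {lifetime_years:.1f} years")  # 8.2 years
```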

The primary means of improving flash density, and hence capacity, have been tighter fabrication geometries and multilevel cell designs, in which each memory location stores more than one bit of information. The complexity involved in fighting these limitations explains why enterprise-grade SSDs sell at a substantial premium. It's also why storage vendors are fixated on hybrid arrays that aim to deliver the best of flash and hard disks in one box.
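The density-versus-endurance arithmetic is easy to see in miniature. The die size below is hypothetical, and the program/erase figures are rough orders of magnitude commonly cited for each cell type, not measured specs:

```python
# More bits per cell multiplies density but slashes endurance. The P/E-cycle
# figures are rough, commonly cited orders of magnitude, not measured specs.
CELL_TYPES = {
    "SLC": {"bits_per_cell": 1, "pe_cycles": 100_000},
    "MLC": {"bits_per_cell": 2, "pe_cycles": 3_000},
    "TLC": {"bits_per_cell": 3, "pe_cycles": 1_000},
}

die_cells = 8 * 10**9  # memory cells on a hypothetical die
for name, spec in CELL_TYPES.items():
    gigabytes = die_cells * spec["bits_per_cell"] / 8 / 10**9
    print(f"{name}: {gigabytes:.0f} GB per die, ~{spec['pe_cycles']:,} P/E cycles")
```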


Kurt Marko is an InformationWeek and Network Computing contributor and IT industry veteran.
