Breaking Point: 2010 State Of Enterprise Storage Survey
The recession doesn't seem to have put a damper on the growth of enterprise data and storage needs. Too bad our infrastructures aren't keeping pace.
January 28, 2010
We're sure most IT pros will agree that the best thing about 2009 is that it's over. CIOs were forced to run much tighter ships, with capital expenditures postponed or put on hold. Forget introducing innovative storage technologies--or sometimes, even doing basic maintenance, despite the fact that many of our infrastructures are bursting at the seams.
In fact, our InformationWeek Analytics 2010 State of Enterprise Storage Survey of 331 business technology professionals reveals an alarming state of affairs: When asked about their top storage concerns, nearly half of respondents say they have insufficient resources for critical applications. Contrast that finding with a year ago, when data loss was the top worry of the 328 technology pros we surveyed; lack of resources was cited by just 30%. Other 2010 results also reflect a grim financial picture. Compared with a year ago, more IT pros say they have insufficient budgets to meet business demands, insufficient tools for storage management, and insufficient storage resources for departmental and individual use. For some, the reality of stretched resources is sending a harsh wake-up call.
Highpointe Hotel had a major hiccup on its main production server in late December and thought it had lost its RAID system, which would've spelled disaster--not just in terms of guest reservations, but year-end financials, payroll and HR data, and network files. "Fortunately, we didn't, but it brought into clear focus the danger we were in by overextending a critical server because of a lack of resources," says Mark Pate, IT director at Highpointe, a hotel management and development company. "The good news is that I quickly got approval for two new Dell servers, which are being staged to go live within the next two weeks."
Meantime, pressure to meet stringent regulatory and data management requirements isn't letting up, and neither is the rate of data growth. In 2009, 75% of survey respondents reported administering more than 1 TB of data, while 24% managed more than 100 TB. One year later, 87% of respondents manage more than 1 TB, and 29% administer more than 100 TB. A year ago, 21% cited data growth rates below 10% per year. Now, just 15% say they have that (relatively) manageable level of expansion.
We wouldn't be surprised to see the percentage of respondents managing less than 1 TB of data drop into the low single digits in next year's survey. After all, you can now buy SATA disk drives with double that capacity. Even larger drives are slated to be available soon.
Clearly, a terabyte ain't what it used to be. Digital images, high-definition video, audio files, and even virtual disks--Microsoft VHD and VMware VMDK--suck up storage capacity much more quickly than the Office documents of years past.
So what's a storage manager to do when budgets are stagnant or even shrinking but data volumes and storage requirements just keep right on growing?
The first line of defense is better utilization of existing resources. We've all gotten used to playing fast and loose with network gear: IT commonly oversubscribes bandwidth, switching backplanes, and router capacities--and we usually get away with it. When it comes to storage, however, most shops haven't come close to full utilization, never mind oversubscription. The compartmentalization typical of direct-attached storage environments makes matters worse, and reclaiming the resulting wasted capacity has long been one of the main benefits stressed by SAN vendors.
Which brings us to our first piece of good news.
As in our 2009 survey, the storage technology used most by respondents is good old-fashioned direct-attached storage. But interestingly, given economic pressure, we saw greater penetration of storage area networks, both Fibre Channel and iSCSI, with Fibre Channel putting in a better showing.
Given tight capital budgets, how did SANs--much less Fibre Channel SANs--do so well?
The simple answer is that vendors have responded to market conditions and developed more affordable options for Fibre Channel and iSCSI SANs, host bus adapters, and Fibre Channel switches. There are still plenty of high-end SANs, with stratospheric prices to match. But we're seeing more lower-cost systems, and not just from small or startup vendors. Dell, EMC, Hewlett-Packard, IBM, and NetApp are among the established players aiming at budget-stretched buyers. Lower price points combined with SANs' promise of greater utilization, streamlined management, and scalability and availability seem to be too enticing to pass up.
Speaking of vendors, we asked readers for their preferences in four categories: leading backup and archiving vendors, leaders in green storage, vendors they're using for tier 1 and 2 storage, and leaders in deduplication. We didn't get many surprises: EMC was in the Top 3 across all four areas. But HP captured top marks for green storage initiatives, and Microsoft broke into the Top 5 leaders in deduplication.
Small vendors and startups aren't sitting idle, by any means. They continue to push the pricing envelope while offering solid innovation for the buck across a variety of specialties.
Other factors working in buyers' favor include the improving economics of 10 Gigabit Ethernet and final ratification of the Fibre Channel over Ethernet protocol. FCoE is a promising storage advance that lets companies combine Fibre Channel and Ethernet networks into a single converged network while reducing the complexity and cost of maintaining and managing multiple networks and fabrics. Unlike iSCSI, FCoE isn't IP-based, and it provides the same lossless network functionality as standard Fibre Channel, making it suitable for demanding storage applications.
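To make the contrast with iSCSI concrete, here's a deliberately simplified sketch (in Python, purely for illustration) of the encapsulation idea: the Fibre Channel frame rides directly inside an Ethernet frame tagged with the FCoE EtherType, with no IP or TCP layer in between. The real protocol adds its own FCoE header plus start- and end-of-frame delimiters, which we omit here.

```python
import struct

FCOE_ETHERTYPE = 0x8906   # FCoE's registered EtherType; iSCSI, by contrast, rides inside IP/TCP

def ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Build a bare Ethernet II frame: 6-byte destination MAC, 6-byte source MAC,
    2-byte EtherType, then the payload. The FCS is left to the NIC."""
    return dst_mac + src_mac + struct.pack("!H", ethertype) + payload

# Stand-in for a real Fibre Channel frame (in reality: SOF delimiter, 24-byte FC
# header, payload, CRC, EOF delimiter). FCoE wraps the whole FC frame, unmodified,
# in Ethernet -- there is no IP or TCP layer, which is the key contrast with iSCSI.
fc_frame = b"\x00" * 28

frame = ethernet_frame(
    dst_mac=bytes.fromhex("0efc00010203"),  # hypothetical FCoE MAC addresses
    src_mac=bytes.fromhex("0efc00040506"),
    ethertype=FCOE_ETHERTYPE,
    payload=fc_frame,                        # real FCoE inserts its encapsulation header here
)
print(len(frame), "bytes on the wire")
```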
Breathing Room
While data retention regulations (and legal and corporate interpretations of them) are in constant flux, the good news is that no significant new regs came online in 2009. A pile of rules were already in place, and we saw major additions and changes to them in 2007 and 2008. The relative calm in 2009 allowed us to catch up and develop more effective policies.
Take e-mail. Fully 87% of survey respondents have defined retention periods, and just as important, a quarter of these respondents save mail for less than two years. Nearly half (46%) have policies to retain e-mail for less than five years. This is a welcome departure from the days of saving everything indefinitely "just in case."
Smart retention and archiving dictate moving infrequently accessed e-mail, Office documents, and database files off of expensive, high-performance storage tiers and onto less costly media.
What's Hot Now
Here are some topics that should be on storage admins' radar for 2010.
Business continuity and disaster recovery: A significantly higher percentage of organizations represented in our 2010 survey have implemented and regularly test disaster recovery and business continuity plans compared with last year (36% vs. 28%). But there's still much room for improvement: 43% of 2010 respondents say their companies have DR/BC plans but rarely test them, and 16% have no plans in place but expect to implement them this year. Five percent have no strategies for the continuation of their businesses.
There's never been a better time to initiate or update your disaster recovery and business continuity plans: We're tracking an exciting technology landscape--replication with deduplication, faster and less expensive tape drives and libraries, backup to disk, and storage and file virtualization--and providers are offering innovative infrastructure and network connectivity options for backup and replication.
Do, however, take a fresh look at what business users consider vital to their survival.
"We've been surprised how critical our Exchange server and BlackBerry Enterprise server have become," Highpointe Hotel's Pate says. "Our users can live without their accounting, payroll, HR, and time and attendance applications for a couple of days, but only a few hours on the Exchange/BlackBerry apps."
Deduplication: While dedupe technology is critical to keeping storage growth in check, it's not a magic bullet. Dedupe tends to work fine for archives, backup, and replication. It can make the process of replication much more efficient by significantly reducing the amount of data that's transported. But dedupe can run into problems when dealing with encrypted data and more performance-oriented applications, such as e-mail and databases. In addition, many disaster recovery and business continuity policies, by design, mandate replicating copies of data to multiple sites.
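For readers who haven't looked under the hood, here's a minimal sketch of the core dedupe idea--chunk the data, hash each chunk, and store (or replicate) only the chunks you haven't already seen. It uses fixed-size chunks and an in-memory dictionary for simplicity; shipping products typically use variable-size, content-defined chunking and far more sophisticated indexing.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking, purely for illustration

def chunks(data: bytes):
    for i in range(0, len(data), CHUNK_SIZE):
        yield data[i:i + CHUNK_SIZE]

def dedupe_store(data: bytes, store: dict) -> list:
    """Keep only chunks we haven't seen; return the 'recipe' of chunk hashes
    needed to reassemble the original data."""
    recipe = []
    for c in chunks(data):
        digest = hashlib.sha256(c).hexdigest()
        store.setdefault(digest, c)     # a duplicate chunk costs nothing extra
        recipe.append(digest)
    return recipe

def reassemble(recipe: list, store: dict) -> bytes:
    return b"".join(store[d] for d in recipe)

store = {}
backup_monday  = b"static corporate data " * 1000
backup_tuesday = backup_monday + b"a few new records"   # mostly unchanged content

r1 = dedupe_store(backup_monday, store)
r2 = dedupe_store(backup_tuesday, store)
assert reassemble(r2, store) == backup_tuesday

# The second backup adds only the chunks that actually changed -- and for replication,
# only those new chunks (plus the small recipe) would need to cross the wire.
logical = len(backup_monday) + len(backup_tuesday)
physical = sum(len(c) for c in store.values())
print(f"logical: {logical} bytes, stored: {physical} bytes")
```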
EMC's acquisition of Data Domain--and the bidding war it had to engage in with NetApp before the deal was done--highlights the importance of this technology in today's storage market. But not everyone is convinced. "While the industry is head-over-heels for this technology, we believe it's only a Band-Aid for poor backup practices," says Tim Pitta, a marketing executive with Seven10 Storage Software, which advocates a file-system-centric archiving model. "IT managers are backing up unchanging content and then using dedupe to fix it. Why not just implement a smart archive, reduce backups, and nearly eliminate dedupe?"
Yes, and we should all eat five servings of vegetables every day, too. The fact is, data classification is hard. For some shops, dedupe may well be the less expensive and less disruptive option.
Storage and file virtualization: Storage virtualization is an abstraction layer that presents one or more physical storage resources and devices to a server or application as logical storage units. It hides the complexities of the individual devices and allows for simpler, more efficient provisioning and management, even across heterogeneous environments, and it streamlines processes such as data migration, replication, and backup. Storage virtualization may be implemented in software, hardware, or a hybrid model, though software implementations are likely to dominate.
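Conceptually, the virtualization layer is just a map from logical extents to physical capacity, wherever that capacity happens to live. The toy sketch below (device names and extent size are made up for illustration) shows why migration and rebalancing get so much simpler: move the data, update the map, and the server never notices.

```python
# A toy virtualization layer: the server sees one logical volume; the layer maps
# logical block addresses onto extents spread across heterogeneous physical arrays.

EXTENT_BLOCKS = 1024  # hypothetical extent granularity, in blocks

class VirtualVolume:
    def __init__(self):
        # logical extent index -> (physical device, physical extent number)
        self.extent_map = {}

    def provision(self, extent_index: int, device: str, physical_extent: int):
        """Back one logical extent with capacity on any physical device."""
        self.extent_map[extent_index] = (device, physical_extent)

    def resolve(self, logical_block: int):
        """Translate a logical block address into (device, physical block)."""
        extent, offset = divmod(logical_block, EXTENT_BLOCKS)
        device, physical_extent = self.extent_map[extent]
        return device, physical_extent * EXTENT_BLOCKS + offset

vol = VirtualVolume()
vol.provision(0, "array_a_lun5", physical_extent=17)   # hypothetical arrays
vol.provision(1, "array_b_lun2", physical_extent=3)

print(vol.resolve(100))    # lands on array_a_lun5
print(vol.resolve(1500))   # lands on array_b_lun2
# Migration or rebalancing is just copying an extent and updating the map --
# the server's view of the logical volume never changes.
```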
This year's survey reveals slightly increased interest in storage virtualization compared with 2009, but adoption is nowhere near as high as we believe it should be, particularly given the increased use of SANs reported by respondents; 45% either employ storage virtualization now, or plan to within 12 months, compared with 35% in 2009.
Plans for deploying file virtualization are even more puzzling and disappointing. In fact, a lower percentage of respondents reported having deployed file virtualization in our 2010 survey than in 2009 (11% vs. 14%), while the shares reporting no plans to use file virtualization (29%) and no knowledge of the technology (15%) were exactly the same as last year.
File virtualization is an abstraction layer that presents a consistent logical path to clients to access a file, regardless of the physical location of that file. With file virtualization, data may be located on various servers and network-attached storage devices, but clients access the files through their virtualized logical paths without needing to know physical locations. File virtualization isn't always easy to implement and can carry significant up-front costs as existing namespaces and paths are replaced with virtual substitutes. But once it's in place, organizations will be able to migrate data much more effectively without impacting users. An example of file virtualization is Microsoft's Distributed File System, which lets IT group shared folders across different file servers and presents them to clients as a virtual tree of folders.
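The idea is easy to see in miniature. The sketch below (server and share names are hypothetical) keeps a tiny global namespace that maps stable logical paths onto physical shares--move the data, update one entry, and clients keep using the same path.

```python
# A toy global namespace, in the spirit of DFS: clients use stable logical paths,
# and the namespace maps each logical folder onto whatever server/share holds it today.

namespace = {
    "/corp/finance":   "\\\\filer01\\finance$",     # hypothetical servers and shares
    "/corp/marketing": "\\\\nas-west\\mktg",
}

def resolve(logical_path: str) -> str:
    """Return the physical location for a logical path, matching the longest prefix first."""
    for prefix in sorted(namespace, key=len, reverse=True):
        if logical_path.startswith(prefix):
            return namespace[prefix] + logical_path[len(prefix):].replace("/", "\\")
    raise FileNotFoundError(logical_path)

print(resolve("/corp/finance/2009/q4.xlsx"))

# Moving the finance share to a new filer is a one-line change to the namespace;
# every client keeps using /corp/finance and never notices the migration.
namespace["/corp/finance"] = "\\\\filer02\\finance$"
print(resolve("/corp/finance/2009/q4.xlsx"))
```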
Thin provisioning: We need a smart way to oversubscribe the storage pool, and that's precisely the problem thin provisioning addresses. Yet again, we're seeing less-than-enthusiastic adoption: Just 20% have the technology in production use, compared with 58% who say they have no plans for it. In today's economic environment, we find this lack of interest hard to explain, particularly since many vendors now include thin provisioning as a standard feature. What's not to like?
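The accounting behind thin provisioning is simple enough to sketch in a few lines. In the toy pool below (the sizes are made up), volumes are created at their promised capacity, but physical space is consumed only as data is actually written--the trick, of course, is watching the pool so growth never outruns the spindles.

```python
# A toy thin-provisioned pool: volumes are created at their *promised* size, but the
# pool hands out physical capacity only as data is actually written.

class ThinPool:
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.committed_gb = 0   # sum of the sizes promised to all volumes
        self.written_gb = 0     # physical capacity actually consumed

    def create_volume(self, size_gb: int):
        self.committed_gb += size_gb   # no physical space is reserved yet

    def write(self, gb: int):
        if self.written_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted -- time to buy disk")
        self.written_gb += gb

    @property
    def oversubscription(self) -> float:
        return self.committed_gb / self.physical_gb

pool = ThinPool(physical_gb=10_000)
for _ in range(5):
    pool.create_volume(4_000)    # promise 20 TB against 10 TB of spindles
pool.write(2_500)                 # but only 2.5 TB has actually been written

print(f"oversubscribed {pool.oversubscription:.1f}x, "
      f"{pool.written_gb}/{pool.physical_gb} GB physically used")
```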
Automated tiering: Likewise, a multitiered storage infrastructure makes a lot of sense to enable efficient operation while maintaining end-user service levels. But manually migrating data between tiers is a difficult and expensive process to maintain over time. Enter automated tiering, another promising but underappreciated storage technology and one that seems to be the focus of many small, cutting-edge vendors. Automated tiering came in just below thin provisioning on our list of technologies in use, with just 14% reporting use. Mark our words: Eventually, we'll wonder how we lived without this technology to match storage resources with space and performance requirements.
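At its core, automated tiering is a policy loop: watch access patterns, demote cold data to cheap capacity, promote hot data back to fast disk. The sketch below works on whole files with a simple age threshold, purely for illustration; real products operate on blocks or sub-LUN extents and weigh actual I/O statistics, not just timestamps.

```python
import time

# A toy policy engine: demote anything not touched in 90 days to cheap capacity disk,
# promote anything recently hot back to the fast tier.

DEMOTE_AFTER = 90 * 86_400  # seconds of idleness before demotion

def retier(catalog: list, now: float) -> None:
    for item in catalog:
        idle = now - item["last_access"]
        if item["tier"] == "fast" and idle > DEMOTE_AFTER:
            item["tier"] = "capacity"       # e.g. a SATA or MAID shelf
        elif item["tier"] == "capacity" and idle < 86_400:
            item["tier"] = "fast"           # e.g. FC/SAS or solid state

catalog = [
    {"name": "orders.db",        "tier": "fast", "last_access": time.time()},
    {"name": "2007_archive.pst", "tier": "fast", "last_access": time.time() - 200 * 86_400},
]
retier(catalog, time.time())
print(catalog)   # the stale archive gets demoted; the live database stays put
```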
Encryption: Interestingly, while 36% of respondents cite encryption as a very important storage technology, we barely inched up in terms of use of encryption on backup tapes--42% are encrypting tapes in 2010 vs. 39% in 2009. This is indefensible given native hardware support for encryption in newer tape drives, not to mention the continuing parade of high-profile PR disasters involving companies losing unencrypted backup media.
We'll allow that encryption isn't as simple as checking a box: Policies must be implemented, and key management procedures developed and followed, to avoid data loss. But with disk-to-disk backup and over-the-wire replication in use at more and more companies, measures are needed to secure enterprise data both in transit and wherever it's stored.
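The mechanics are the easy part; the discipline is in the key management around them. As a rough illustration only--using Python's third-party cryptography package, not any particular backup product--the sketch below encrypts a backup stream with a key that must be escrowed somewhere other than the media it protects. The KMS call shown is a hypothetical placeholder.

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

# Generate the data-encryption key ONCE and escrow it in your key-management
# system; losing the key means losing every backup encrypted with it.
key = Fernet.generate_key()
# key_management_system.escrow("backup-set-2010-01", key)   # hypothetical KMS call

cipher = Fernet(key)

backup_stream = b"payroll, HR, and year-end financials..."  # what would hit tape or the WAN
protected = cipher.encrypt(backup_stream)                   # this is what leaves the building

# Restore path: fetch the escrowed key, then decrypt.
assert Fernet(key).decrypt(protected) == backup_stream
```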
Solid state drives and MAID: These disk technologies address opposite ends of the performance spectrum. Solid state drives are expensive and suitable for only the most demanding performance requirements, whereas massive array of idle disks (MAID) systems are ideal for storing seldom-accessed files. But what they have in common is the ability to reduce energy consumption and cooling requirements, a critical factor in 2010. Consumption of electricity in the data center grew a stunning 191% between 2000 and 2006, according to the EPA. Over that time period, storage grew from the least power-guzzling sector to the most, consuming 32.3% of energy used, says the agency. And this upward trend will continue as data volumes keep growing, unless meaningful operational and technological changes are implemented; both of these disk types can help here.
Finally, we asked about expected use of public cloud offerings. Last year's survey was decidedly negative, but respondents have warmed up to the idea: 34% are considering the cloud compared with 19% in 2009. Still, impressive as a 15-point jump may be, 54% of respondents have no plans to implement cloud storage. "I'm really not clear on how these 'as a service' vendors expect that Internet bandwidth in a multinational corporate setting is more plentiful and cheaper than disk drives," says one respondent. "Maybe in 20 years, but not anytime soon."
Although economic conditions are improving slowly, the list of planned projects cited in our survey shows that IT is not banking on a cushier financial landscape in 2010. Rather, we're entering the new decade with a laser focus on doing more with less and maximizing storage utilization. So don't write the cloud off--even risk-averse companies might find niche uses, for example, to store digital images that are accessed infrequently. These services should join virtualization, dedupe, thin provisioning, and other emerging technologies in our toolboxes as options to deal with exploding data volumes and tight budgets.
Behzad Behtash is an independent IT consultant who previously served as CIO of Tetra Tech EM and VP of systems for AIG Financial Products.
You can write to us at [email protected].