Is It Time to Deconstruct the Array?

Can IT effectively drive cost out of the array by doing point-in-time mirror splits, virtualization and other functions outboard rather than inboard? We explore.

February 16, 2005

Imagine that the brand name stamped on your big-iron storage array no longer matters. No more paying through the nose for bloated software feature sets slapped onto commodity hardware. No longer is storage the monster consuming your IT budget.

If that sounds like a fairy tale, you've ingested too much storage-vendor Kool-Aid. The reality is that all storage array hardware is essentially the same. Only cosmetic differences--the work of Italian design firms hired by vendors to make over the faceplates of their otherwise bland boxes--differentiate them. The reason you're still cutting corners elsewhere to feed the storage beast is software--all those capacity, data-management, virtualization, mirroring and data-protection features. If we could pull this functionality off the array and onto purpose-built storage systems--in other words, deconstruct the big iron--we could take advantage of commodity-priced hard disk drives. Of course, this requires a good storage-management framework; without that, all bets are off. But if developing a multitiered infrastructure that's both cost-effective and highly manageable sounds appealing to you, read on.

If we could break a monolithic storage array into its component parts, it would consist of six elements (see "Array Deconstructed," right):

• A "controller," usually provided in pairs for redundancy; often a computer that hosts value-added software features and provides RAID on the array

• An external interface that supports the protocol (IP, Fibre Channel, parallel SCSI or other interconnect du jour) and wiring used to connect storage to server, plus physical cables and connectors

• A backplane that interfaces the controller to disk drives; usually includes a Fibre Channel arbitrated loop or FC fabric switch for interconnecting disks and controllers inside the array

• The array's power supplies, backup batteries and fans

• Trays of commodity disk drives

• The array software, including management and diagnostics, user interface, virtualization and value-added features

[Figure: Array Deconstructed]

Vendors collectively represent these six elements as an "integrated platform," balanced in form and function courtesy of sharp engineers and lots of R&D dollars. Vendors also like to point out the advantages to consumers of "one throat to choke" if anything goes awry, and that simplified management means fewer personnel can oversee more storage.

However, vendor assertions are not proving out in many shops. Integration hasn't made storage platforms better performers or storage-management personnel more productive. IT managers from one Fortune 500 cellular communications company told us their brand-name integrated storage arrays, recently acquired and cobbled together into a Fibre Channel fabric, haven't delivered anything resembling the vendor's promised value. Software included with the platform contains a significant amount of functionality for which the company has no use, while other features considered critical for realizing promised ROI (return on investment) from the acquisition--notably platform-management functionality--are incomplete or buggy. The CIO likened the software feature set on the integrated array to an old WAP (Wireless Application Protocol) phone--a product designed to deliver functions sought by such a broad range of users that it never did anything well. It's a classic case of the "jack of all trades, master of none."

Then there's the money. Integrated arrays cost big bucks. With IT budgets under close scrutiny, soaring storage costs stand out like nails just waiting for an auditor's hammer.

Depending on which analyst or IT manager you consult, storage now accounts for between 30 percent and 70 percent of annual IT hardware spending. So if the array's on-board software features could be externalized, and the same functionality delivered just as cheaply and reliably outside the box, a compelling business case for deconstruction could be made.

Moreover, externalizing array functions would break the stranglehold array vendors gain with proprietary software. With software components split out of the array, technology lock-ins that enforce consumer loyalty while freezing out competitors would be history.

Oversubscription with underutilization, a universal condition in corporate storage infrastructures, also could be effectively addressed by fielding more intelligent and economical purpose-built storage. Just as a PC or server configuration is driven by the class of applications the computer hosts, storage should be built based on the data-hosting characteristics of the bits produced and used.

In a true deconstructed storage infrastructure, all arrays would be designed to support applications and their data. Matching platforms to application classes would minimize hardware costs (buy only what you need, and at commodity prices) and optimize performance (select appropriate disk, interface and interconnect options).

Although this one-infrastructure/one-platform idea may elicit groans from some IT managers, many small vendors are already fielding platforms that support specific classes of applications or data. These vendors are saying, for example, that their products are suitable for reference data--that is, data that's frequently read but rarely modified. Other offerings, using different disk drives, interfaces, interconnects and support software, are described as suitable for production data that's modified frequently. It's a start.
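To make the matching concrete, here's a minimal sketch in Python of how application classes might map to component profiles. The class names and component choices are hypothetical, era-typical examples, not drawn from any vendor's catalog:

```python
# Illustrative mapping of hypothetical application classes to
# purpose-built storage profiles. Component choices are examples,
# not vendor recommendations.
STORAGE_PROFILES = {
    "production-oltp": {                    # data modified frequently
        "drives": "15K RPM Fibre Channel",
        "interconnect": "FC fabric",
        "raid": "RAID 10",                  # favor write performance
    },
    "reference": {                          # read often, modified rarely
        "drives": "7,200 RPM SATA",
        "interconnect": "iSCSI on Gigabit Ethernet",
        "raid": "RAID 5",                   # favor capacity at commodity prices
    },
}

def platform_for(app_class: str) -> dict:
    """Return the component profile for a given application class."""
    return STORAGE_PROFILES[app_class]

print(platform_for("reference"))
```

The point isn't the code; it's that the profile, not the brand name, drives the hardware bill of materials.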

To make this purpose-built approach work, there's one non-negotiable prerequisite: an overarching management software framework. Its pedigree is less important than proof that it can make a heterogeneous storage infrastructure visible to the operator in a coherent and application-centric manner. Excellent products are available from platform-agnostic vendors, including Computer Associates, while Microsoft and other OS vendors are co-opting storage-management functionality into their OSs.

Special-function software, such as for storage provisioning and data protection, should plug into the infrastructure rather than be bundled on the hardware. For example, instead of using on-controller virtualization services, choose a hardware-neutral virtualization product, like those from DataCore Software, FalconStor Software, Microsoft and Veritas, to give yourself more, and less expensive, options. For continuous data protection, you might be better served by standalone software products, such as Revivio's Time Addressable Storage, than by point-in-time data-mirroring software wedded to an array that uses $180-per-gigabyte disks for making mirror-split volumes.
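For a sense of why a hardware-neutral layer buys you options, consider what block virtualization boils down to: an extent map that assembles one logical volume from LUNs on anyone's hardware. The following is a simplified sketch of that general technique, not any listed vendor's implementation, with hypothetical LUN names:

```python
# Sketch of the core idea behind hardware-neutral block virtualization:
# a logical volume is an ordered list of extents, each of which can live
# on a LUN from any vendor's array -- or on commodity JBOD.
from dataclasses import dataclass

@dataclass
class Extent:
    lun: str      # physical LUN identifier (hypothetical names below)
    start: int    # starting block on that LUN
    length: int   # extent size in blocks

class LogicalVolume:
    def __init__(self, extents: list):
        self.extents = extents

    def resolve(self, block: int) -> tuple:
        """Translate a logical block address to (LUN, physical block)."""
        offset = block
        for ext in self.extents:
            if offset < ext.length:
                return ext.lun, ext.start + offset
            offset -= ext.length
        raise ValueError("block address beyond end of volume")

# One volume concatenated across two different vendors' hardware.
vol = LogicalVolume([Extent("brandname-lun0", 0, 1000),
                     Extent("commodity-jbod-lun3", 0, 1000)])
print(vol.resolve(1500))   # -> ('commodity-jbod-lun3', 500)
```

Because the map lives outside any one array, the disks underneath it can be whatever is cheapest and most appropriate.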

Given a top-notch management framework, special-function software provided as an adjunct to the storage hardware, rather than embedded in it, could deliver the same or better provisioning and protection than what storage vendors offer on their arrays.

[Figure: Purpose-Built Storage Infrastructure]

Getting Over the Initial Hurdles

Unfortunately, there are many obstacles to a deconstructed storage future. A major one is indifference to the idea of deconstruction among many technology managers.

Remember the 1980s, when common wisdom held that "no one was ever fired for buying IBM"? The current attitude toward big iron is similarly complacent; selecting an integrated array from a brand-name vendor is the "path of least resistance" to budgetary approval.

The CIO for a Midwestern electrical utility told us that, regardless of her views about the technical adequacy or appropriateness of big-iron storage solutions to her organization's application set, array-purchasing decisions typically are made by non-technical senior managers, whose approval has been cultivated by vendors through a combination of "strategic relationship building," paid analyst endorsements and out-and-out distortions of platform capabilities and costs.

Even when purchasing decisions remain in the hands of technology managers, many have yet to question the value proposition of large arrays. Some still have little time for the price-performance nuances of storage. Others, who have made an effort to understand the costs and capabilities of storage-technology alternatives, have abandoned the quest, complaining that the "market speak" of vendors is just too difficult to crack.

Woefully few brand-name vendors provide the data required for informed consumer decisions. Try to find a suggested retail price in marketing literature or on Web sites. Slippery value statements often are substituted for real performance data. And at least one brand-name vendor prohibits its customers from publicly disclosing the details of performance received from the vendor's high-end arrays.

In their own defense, vendors argue that performance depends on the particulars of the customer's environment. Moreover, among large enterprises, they say, feature sets rather than MSRP are the basis for most sales. This explanation might hold water if storage array feature sets were sold through an easy-to-understand a la carte approach that would let organizations customize their products with just those drive types, interfaces, controller types, RAID levels, resiliency features and interconnect methods needed to support their applications.

Instead, vendors take their cues from the Model T: "You can buy the product in any color you want, as long as it's black."

Turning a Profit

Ever pay $3.49 for a big bag of chips only to find it just one-third full? If you complain, the grocer will point to the fine print that says, "Contents may settle during shipping." Likewise, the packaging of an integrated array often conceals vendor tricks for sustaining sky-high profit margins. For example, the disk drives in the array may carry a 300 percent markup over the retail price you'd pay if you purchased the drives separately at your local computer store (see the back-of-the-envelope arithmetic below). Vendors argue that this hike is warranted by the extra effort they expend to format the drives to a special block size and to perform quality assurance prior to delivery. This rationalization ignores the fact that formatting utilities are widely available at little or no cost, and that most enterprise-class disk drives are tested by their manufacturers and sold with generous replacement warranties.

The bottom line is that, just as EMC's Michael Ruettgers observed nearly four years ago, storage arrays are becoming a commodity. Integrated software features do add value--when they function as advertised and address the consumer's application data-support requirements. Purpose-building infrastructure under a common management framework will help bring storage expenses under control.
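The markup arithmetic is simple enough to check. In this sketch, only the 300 percent figure comes from the text; the retail price and array capacity are assumptions chosen purely for illustration:

```python
# Back-of-the-envelope markup math. The 300 percent markup is the
# figure cited above; the retail price and capacity are assumed.
retail_per_gb = 1.50                       # assumed street price, $/GB
markup = 3.00                              # 300% markup over retail
array_per_gb = retail_per_gb * (1 + markup)

capacity_gb = 10_000                       # a modest 10-TB array
print(f"retail drives: ${retail_per_gb * capacity_gb:,.0f}")
print(f"array pricing: ${array_per_gb * capacity_gb:,.0f}")
print(f"premium paid:  ${(array_per_gb - retail_per_gb) * capacity_gb:,.0f}")
```

Under those assumptions, a $15,000 pile of commodity disks sells for $60,000 behind the array's faceplate.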

The time to deconstruct the array has come.

Jon William Toigo is CEO of storage consultancy Toigo Partners International, founder and chairman of the Data Management Institute and author of 15 books, including The Holy Grail of Network Storage Management (Prentice Hall PTR, 2003). Write to him at [email protected].

The proliferation of storage technologies and the commoditizing impact of standards have provided many design options that smart consumers can use to build customized, purpose-built platforms. Moreover, the trend among component vendors to categorize their wares--disk drives, controllers (interfaces), interconnects and power supplies--based on their suitability for different data usage also facilitates purpose-built storage.

Take Seagate, which goes to great pains to guide consumers through its disk offerings by classifying certain drives as suited to, for example, "enterprise applications with frequent updates." Similarly, controller vendors commonly target their wares at specific applications, accessibility characteristics and data-update frequencies. One warning, though: Many vendor product characterizations are self-serving. You must understand your applications and their data-storage requirements before you go shopping for custom platforms.

Over time, and with enough participation from end users, it may be possible to shortcut the customization process and create profiles that identify suitable component mixes based on application class and usage characteristics. Until that happens, you must do the heavy lifting of analysis and component-matching on a one-off basis. The good news is that some vendors, including Adaptec, are already considering the use of configurators to help streamline the defining and ordering of purpose-built storage platforms.
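In its simplest form, such a configurator reduces to a rule set that matches workload characteristics to a component mix. Here's a hypothetical sketch; the thresholds and component names are invented for illustration and imply nothing about Adaptec's plans:

```python
# Hypothetical purpose-built storage configurator. The selection rules,
# thresholds and component names are invented for illustration and say
# nothing about any vendor's actual tool.
import math

def configure(reads_per_write: float, capacity_tb: float,
              mission_critical: bool) -> dict:
    if reads_per_write > 10:               # reference data: read often, write rarely
        drives, raid = "7,200 RPM SATA", "RAID 5"
    else:                                  # production data: update-heavy
        drives, raid = "15K RPM Fibre Channel", "RAID 10"
    interconnect = "FC fabric" if mission_critical else "iSCSI"
    shelves = max(1, math.ceil(capacity_tb / 2))   # assume ~2 TB per drive shelf
    return {"drives": drives, "raid": raid,
            "interconnect": interconnect, "shelves": shelves}

# Example: an e-mail archive -- mostly reads, 6 TB, not mission-critical.
print(configure(reads_per_write=50, capacity_tb=6, mission_critical=False))
```

Feed it production-class numbers instead, and the same rules call for faster disks, RAID 10 and a Fibre Channel fabric: the component mix follows the workload, not the brand.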
