Utility Computing: Vendor Doctrines

Big-name vendors want you to take it on faith that theirs is the true path. Don't convert just yet.

November 21, 2003


Embryonic as utility computing may be, it's been the subject of heated discussion for at least two years. In mid-2000, Hewlett-Packard touted a novel way of pricing its then-new HP9000 Superdome line of high-end Unix servers: HP would let customers "pay as they grow" rather than require up-front payments. Although HP described this as "utility pricing," we prefer to call it expansion pricing: As usage went up, so did the cost. But if usage went down, the bill didn't. Still, HP did use its financing arm to offer (mostly very large) customers innovative pricing flexibility; it now calls this initiative iCOD (Instant Capacity On Demand).

Since then, HP and other sellers of high-end servers and storage systems have continued to create more flexible purchasing schemes that are lumped under the utility-computing header. HP, for example, now offers three nonconventional purchase options for its server and storage systems: iCOD; Managed Capacity, in which resources and management expertise are purchased on a per-use basis; and Metered Capacity, which is essentially iCOD with the ability to dial capacity (and costs) down, then back up on demand.

Each of these options is interesting and makes sense for at least some customers. Be aware, however, that every vendor offering this type of purchasing option is doing so with an eye on the bottom line. In fact, most of these programs are offered through the vendor's financing arm, and those financing operations have their own bottom lines to watch. The decision is a lot like whether to lease or buy a car--it all depends on the buyer's circumstances. Risk management should dictate your direction more than a simple attempt to lower purchase costs.

That's because these new purchasing models lower risks in two ways. First, companies with highly volatile and unpredictable usage patterns can work out deals where payments for resources approximate their usage and, therefore, their income. Second, every time equipment in the field needs an upgrade--scheduled or not--the vendor and owner incur costs and risk. The costs come in the new hardware and in the labor of sending technicians out to do the work. The risk is that every time you open up a computer or storage system to install new hardware, something could go wrong. Shipping a system with more resources than the customer needs up front lets the vendor switch on capacity later without a site visit--a very effective way for vendors to manage their field-staff and risk costs.

So far, we've been talking about high-end systems--high-value, unique resources, such as very large database servers or large centralized storage systems. These are high-margin items for vendors, so doing some creative financing to retain old business and win new is all part of the game. If that were all there was to utility computing, it would hardly be the next big thing. In fact, we think that in the short run, this pricing-game version of utility, or on-demand, computing is one only large IT organizations should be playing.

What's interesting for a wider range of companies is the effort to create hardware, software and management tools that let IT assets be allocated and reallocated quickly, efficiently and, eventually, automatically as need arises. This is the holy grail of utility computing and an area in which every vendor claims to play. But as with any new technology, the bigger the vision, the less compelling and immediately useful early offerings are. It's going to take three years or more to rearchitect software systems to fully incorporate the promise of utility computing.

So what's here now? On the hardware side, new options for high-end servers, the advent of blade-based servers and the increasing range of modular storage systems will benefit IT shops of almost every size. These cheap, easily duplicated products offer IT architects the ability to field fleets of redundant servers and create huge storage systems at bargain-basement prices--good news for beleaguered capital budgets. Vendors like Dell do particularly well in this arena because the utility-computing model values modularity, price and standards adherence above unique (and usually proprietary) vendor-added value, at least as far as hardware is concerned.

However, there are things to watch out for. Firmware inconsistencies or substitutions of embedded hardware will make it problematic to treat every blade server as a clone of the first--yet it's exactly that clonelike nature that makes these cheap servers on a blade appealing. With that in mind, seek to create pools of resources--such as CPU cycles, network bandwidth and storage--and allocate those resources in as automated a way as possible. That way you'll be able to respond to the needs of your company more quickly, and keep fewer resources idling.
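To make that pooling idea concrete, here's a minimal sketch in Python of how an automated allocator might hand identical units out of a pool and reclaim them when a workload winds down. The class and method names are our own illustration, not any vendor's tool or API.

```python
# Minimal illustration of pooled-resource allocation (hypothetical names,
# not any vendor's API). A pool tracks interchangeable units -- blades,
# storage volumes, bandwidth slices -- and hands them out on demand.

class PoolExhausted(Exception):
    """Raised when a request exceeds the pool's free capacity."""

class ResourcePool:
    def __init__(self, name, units):
        self.name = name
        self.free = set(units)      # units not currently assigned
        self.in_use = {}            # unit -> workload it serves

    def allocate(self, workload, count=1):
        """Assign `count` free units to a workload, or fail cleanly."""
        if count > len(self.free):
            raise PoolExhausted(f"{self.name}: need {count}, have {len(self.free)}")
        granted = [self.free.pop() for _ in range(count)]
        for unit in granted:
            self.in_use[unit] = workload
        return granted

    def release(self, workload):
        """Return every unit held by a workload to the free pool."""
        returned = [u for u, w in self.in_use.items() if w == workload]
        for unit in returned:
            del self.in_use[unit]
            self.free.add(unit)
        return returned

# Example: a pool of eight identical blades serving two applications.
blades = ResourcePool("blade-pool", [f"blade-{i}" for i in range(8)])
blades.allocate("web-frontend", 3)
blades.allocate("batch-reports", 2)
blades.release("batch-reports")     # capacity returns for the next request
```

The point isn't the code itself but the discipline: identical, interchangeable units, tracked centrally and reassigned automatically rather than by a technician with a clipboard.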

Of course, while these utilization and productivity improvements sound good in broad terms, the devil is in the details--particularly the management software details.

Cult of Virtualization

The management-software challenge can be illustrated by the diverse utility-computing stories told by EMC and Veritas. Both vendors have their roots solidly in storage, with EMC a top-tier provider and Veritas starting life selling storage-management software but quickly broadening its offerings to include clustering technology and more.

For all storage vendors, the goal in terms of utility computing is storage virtualization. Virtualization means creating pools of bulk storage that can be flexibly tapped for various needs of the enterprise. The storage-pool boundaries are usually determined by performance and availability (and therefore price). In a virtualized environment, at least in theory, it's possible to move blocks of stored data between pools based on organizational requirements for that data (see "Nice, Neat Storage: The Reality"). Both EMC and Veritas are creating tools that simplify the management of virtualized environments. But that's where the similarity ends.
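As a rough illustration of the policy side of virtualization--not any EMC or Veritas interface--the Python sketch below picks the cheapest storage pool that still satisfies a data set's performance and availability requirements. The pool names and figures are invented.

```python
# Illustrative policy-based placement across virtualized storage pools.
# Pool names, attributes and thresholds are invented for this example;
# real products expose this kind of decision through their own tools.

POOLS = [
    {"name": "tier1-fc",   "iops": 10000, "availability": 0.9999, "cost_gb": 0.90},
    {"name": "tier2-scsi", "iops": 3000,  "availability": 0.999,  "cost_gb": 0.30},
    {"name": "tier3-ata",  "iops": 800,   "availability": 0.99,   "cost_gb": 0.08},
]

def place(dataset, min_iops, min_availability):
    """Pick the cheapest pool that still meets the dataset's requirements."""
    candidates = [p for p in POOLS
                  if p["iops"] >= min_iops and p["availability"] >= min_availability]
    if not candidates:
        raise ValueError(f"no pool satisfies requirements for {dataset}")
    return min(candidates, key=lambda p: p["cost_gb"])["name"]

print(place("order-database", min_iops=5000, min_availability=0.9999))  # tier1-fc
print(place("mail-archive",   min_iops=500,  min_availability=0.99))    # tier3-ata
```

In shipping products that placement decision is wrapped in management consoles and migration engines, but the underlying trade-off is the same: match data to the least expensive tier that meets its service level.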

Because EMC provides both storage systems and management software, it can offer not only virtualization capabilities but also pay-as-you-go pricing on its storage systems and infrastructure. This enhancement to EMC's OpenScale architecture permits metered payment for the resources used. EMC goes a step further by metering not just storage usage but also the use of devices such as SAN switches. This two-pronged approach will likely be favored by high-end hardware providers seeking to hang on to customers.

Like EMC, Veritas is seeking to improve its virtualization offerings. But it's taking a multivendor approach, letting customers choose the storage hardware. Commodity-priced storage arrays teamed with Veritas software can provide a more affordable alternative to offerings from vendors like EMC. If you're skeptical about the Veritas approach, consider how nicely it mirrors the Microsoft/Wintel dynamo that now dominates our desktops and our low-end and midrange servers. EMC understands this threat and recently acquired Documentum, a document-management company. This gives EMC a little more breadth--but only a little.

For Veritas, the acquisitions this year of Jareva Technologies, a start-up developing automated server provisioning apps, and Precise Software, a developer of server-management and application performance-monitoring tools, make its proposition even more appealing. This combination of storage, server and application-management tools aligns well with the goals of utility computing. Veritas refers to its plan for data-centerwide management as "Just In Time Computing." Veritas' short-term challenge is to mature and integrate the tools it's acquired. If it does a good job, customers will see a level of heterogeneous, broad-based data-center management that's unlikely to come from HP, IBM or Sun, which have a parochial interest in supporting their hardware first and competitors' a distant second.

There's no doubt that all the vendors we mention are serious about empowering IT to do more while simplifying the way it is done. IBM, however, argues that hard problems aren't easy to solve, pointing out that heterogeneous environments with legacy equipment and entrenched IT practices make it impossible for any vendor to waltz in with a box of software, leave it on the IT director's desk and consider the customer satisfied. Believing that organizational dynamics need to change at least as much as systems and software, IBM beefed up its vaunted Global Services arm with the purchase of PriceWaterhouseCoopers Consulting. The company's On Demand strategy now includes more than computing.

The flip side of the PWCC acquisition, which gives IBM a reach into the highest echelons of the corporate suite, then becomes: Do you want to get your business strategy advice from a vendor whose main business is to sell you other systems and services? After all, an independent consultant is supposed to be independent. IBM is never the lowest price, is often not the best of breed and is rarely the first to market, but it does cover the big picture, from desktops to high-end servers, storage systems to OSs, and management software to Web services middleware. It also has unparalleled professional services to help large organizations accomplish their goals.

HP offers the alternative. It couldn't acquire an entrenched consulting organization, and it doesn't own a Web services offering or a viable database. So instead, you get a package of HP, Oracle, BEA Systems and Accenture.

Which route is better? That depends on who's asking the question--the single vendor versus best-of-breed preference will come into play. What's critical is that your vendor sees the big utility-computing picture, including consulting and applications. The goal is to create a more flexible and cost-effective IT department, and that likely means stem-to-stern re-engineering.

Here Comes the Sun

Dell and Sun are the other two obvious contenders in the big-picture view. Dell, much to its credit, doesn't attempt to be the end-to-end provider. It's sticking to what it knows: high-quality, commodity-priced hardware. Sun, unfortunately, comes off looking a lot like EMC. Sun's story is too much about proprietary hardware and software, where the value proposition is the two components working well together. But in a heterogeneous world, that leaves Sun sitting out of much of the utility-computing game.

The company disputes this, pointing out that its N1 architecture and Sun Fire blade platform accommodate both x86 blades and 64-bit SPARC blades. Sun's story doesn't sound bad, but its notoriously poor support for non-SPARC, non-Solaris architectures will leave most buyers gun-shy. Evidence can be found right on Sun's N1 solutions Web page, where SPARC-based blades get their own page with complete product descriptions, while the x86 blade gets only a mention of its existence. Not very reassuring, and you'd think by now Sun would know better.

We've managed to get all this way without mentioning Microsoft, but if you think the Redmondians are sitting on the sidelines of utility computing, you don't know them very well. Microsoft's answer to utility computing is its DSI (Dynamic Systems Initiative), which--to the cynics among us--looks an awful lot like the company's XML initiatives all rolled together as a utility-computing solution.

Microsoft says that what all the other vendors are doing is good, but really only a Band-Aid. What's required is a ground-up redesign of development tools, applications and operating systems so that they are aware of their operating environments and can work directly with management tools to determine needs and reallocate resources. That's right, turn out the lights and fire the IT staff. The computers will run themselves. Anything short of that ground-up redesign will leave the utility-computing dream wanting, at least according to Microsoft.

As part of its DSI, Microsoft offers SDM (System Definition Model), an XML-based language that will be used by apps, OSs and management software to communicate the aforementioned environment-specific information. Got that? Basically, Microsoft is using XML to define a way for an application to describe its own system requirements. It's a simple concept, and it makes a lot of sense. But there's a catch.
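We won't try to reproduce SDM syntax here. Instead, the Python sketch below is a purely hypothetical stand-in that shows the general idea: an application emitting an XML descriptor of its own resource requirements for management software to consume. The element and attribute names are invented for illustration, not Microsoft's schema.

```python
# Hypothetical sketch only -- not actual SDM syntax. It simply shows an
# application describing its own resource requirements in XML so that
# management tools could provision for it automatically.
import xml.etree.ElementTree as ET

app = ET.Element("application", name="order-entry", version="2.1")
req = ET.SubElement(app, "requirements")
ET.SubElement(req, "cpu", minCores="2", preferredCores="4")
ET.SubElement(req, "memory", minMB="2048")
ET.SubElement(req, "storage", minGB="200", performanceTier="high")
ET.SubElement(req, "dependency", service="sql-database", version=">=8.0")

print(ET.tostring(app, encoding="unicode"))
```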

The catch is what's required to reach this promised land. Not surprisingly, Microsoft says the first step is to wholeheartedly embrace Windows 2003 and its associated apps, because they contain the first steps toward the DSI utility-computing vision, including things like automated server software image-creation and deployment tools. But that's just the sizzle. The real meat comes in the next version of Windows, code-named Longhorn. When will Longhorn be here? Probably best to check with Vegas oddsmakers, but this is the typical three- to five-year strategy for Microsoft.

So, as usual, Microsoft insists the best thing to do is immediately adopt the version of Windows it's releasing now and wait for the real benefits of its plan to be delivered in the version that is on the drawing board. None of this legacy system crap! If you want the benefits of utility computing and a dynamic data center, you'd better be in lockstep with the latest and greatest.

This is a story we've heard repeatedly from Microsoft, but just like with a 5-year-old who wants to read the same book over and over again, that doesn't mean the story isn't good. The difference is all about audience. If there is one job that Microsoft has done well (perhaps too well), it's simplifying the deployment of its OSs and apps, a stated goal for utility computing. Customers can buy some industry-standard hardware, pop in a CD, accept all the Install Wizard default options, and have a fully functioning server and app in an hour or so. It isn't necessarily optimized, and it certainly isn't secure, but it is up and running.

To the big-enterprise crowd, this capability isn't all that important. Their needs are complex, and IBM, HP and Sun are there to help. But for small and midsize companies, ease of use is what it's all about, and for them, Microsoft's utility-computing story is powerful.

No matter which approach makes sense to you, realize that utility computing is a journey that we're just starting. A healthy dose of skepticism is warranted. This is a time of little steps, with broad-based benefits coming in two to three years. Dip your toe where appropriate, but don't jump in until the pool fills up.

Art Wittmann is a Network Computing contributing editor. He was previously the editor of Network Computing, and has also worked at the University of Wisconsin Computer-Aided Engineering Center as associate director. Send your comments on this article to him at [email protected].


Utility computing won't happen overnight. Here are the steps the industry must take to reach this nirvana.

Management tools must:

• Provide a better picture of how resources are used;

• Simplify and/or automate repetitive tasks within IT disciplines--for example, moving a database or deploying a new server OS;

• Automate resource allocations to existing applications--for example, increasing storage allocations on the fly and provisioning new servers as application demand dictates;

• Automate repetitive tasks across IT disciplines based on business logic demands--for example, holistically provisioning storage, server and network resources for new applications and users.

Computer and storage hardware must:

• Provide consistent modular, self-diagnosing, self-healing resources;

• Virtualize systems to provide a continuum of performance levels and price points within the local IT infrastructure;

• Provide higher-level virtualization to enable a global continuum of performance levels;

• Provide global access to unique computing resources on demand (read: grid computing).

Application software must:

• Provide standardized hooks for management tools to assess performance and resource utilization (see the sketch after this list);

• Revamp application design and licensing policies to allow for more flexible deployment;

• Allow automated resource management on the fly;

• Develop virtualization-friendly applications with the ability to reconfigure and redeploy themselves based on business logic.
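To illustrate the first of those application requirements, here's a bare-bones, hypothetical example of an application exposing a utilization "hook" that a management tool could poll. The element names and the capacity hint are assumptions for the sake of the sketch, not part of any published standard.

```python
# Invented example of an application publishing its own utilization figures
# for a management tool to poll. Element names are assumptions for this
# sketch, not any emerging standard.
import os
import time
import xml.etree.ElementTree as ET

def utilization_report(active_sessions, queue_depth):
    """Build a machine-readable snapshot of what this app is consuming."""
    report = ET.Element("utilization", app="order-entry", pid=str(os.getpid()),
                        timestamp=str(int(time.time())))
    ET.SubElement(report, "sessions", active=str(active_sessions))
    ET.SubElement(report, "queue", depth=str(queue_depth))
    # A hint the allocator can act on: this app thinks it needs more capacity.
    ET.SubElement(report, "capacityHint", wantsMore=str(queue_depth > 100).lower())
    return ET.tostring(report, encoding="unicode")

print(utilization_report(active_sessions=42, queue_depth=7))
```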
