Embryonic as utility computing may be, it's been the subject of heated discussion for at least two years. In mid-2000, Hewlett-Packard touted a novel way of pricing its then-new HP9000 Superdome line of high-end Unix servers: HP would let customers "pay as they grow" rather than require up-front payments. Although HP described this as "utility pricing," we prefer to call it expansion pricing: As usage went up, so did the cost. But if usage went down, the bill didn't. Still, HP did use its financing arm to offer (mostly very large) customers innovative pricing flexibility; it now calls this initiative iCOD (Instant Capacity On Demand).
Since then, HP and other sellers of high-end servers and storage systems have continued to create more flexible purchasing schemes that get lumped under the utility-computing header. For example, HP now offers three nonconventional purchase options for its server and storage systems. Besides iCOD, there's Managed Capacity, under which resources and management expertise are purchased on a per-use basis, and Metered Capacity, which is essentially iCOD with the ability to dial capacity (and costs) down, then back up, on demand.
Each of these options is interesting and makes sense for at least some customers. However, be aware that every vendor offering this type of purchasing option is doing so with an eye on the bottom line. In fact, most of these programs are offered through the vendor's financing arm, and those financing operations have their own bottom lines to watch. The decision is a lot like whether to lease or buy a car--it all depends on the buyer's circumstances. Risk management should dictate your direction more than a simple attempt to lower purchase costs.
That's because these new purchasing models lower risk in two ways. First, companies with highly volatile and unpredictable usage patterns can work out deals where payments for resources approximate their usage and, therefore, their income. Second, every time equipment in the field needs an upgrade--scheduled or not--the vendor and the owner incur costs and risk. The cost comes in the new hardware and in sending technicians out to do the work; the risk is that every time you open up a computer or storage system to install new hardware, something could go wrong. Preloading a system with more resources than the customer currently needs, then activating them remotely as demand grows, lets vendors manage those field-staff and risk costs very effectively.
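To make the first point concrete, here's a minimal sketch in Python. Every figure in it--the monthly demand numbers, the per-CPU purchase price and the metered rate--is our invention for illustration, not any vendor's actual pricing:

```python
# Illustrative only: made-up numbers, not actual vendor pricing. A rough
# sketch of why usage-based payments appeal to a shop with volatile
# demand: costs track the workload (and, presumably, the revenue)
# instead of being fixed at peak capacity.

# Hypothetical monthly CPU demand, in processors, for a volatile workload.
monthly_demand = [8, 24, 10, 32, 12, 28, 9, 30, 11, 26, 10, 29]

PEAK = max(monthly_demand)   # a conventional buyer must size for the peak
PRICE_PER_CPU = 20_000       # assumed one-time purchase price per CPU
METERED_RATE = 1_500         # assumed per-CPU, per-month metered rate

# Conventional purchase: pay for peak capacity on day one.
upfront_cost = PEAK * PRICE_PER_CPU

# Metered capacity: pay each month only for what was switched on.
metered_cost = sum(cpus * METERED_RATE for cpus in monthly_demand)

avg_util = sum(monthly_demand) / (len(monthly_demand) * PEAK)
print(f"Up-front purchase (sized for peak of {PEAK} CPUs): ${upfront_cost:,}")
print(f"Metered payments over 12 months:                   ${metered_cost:,}")
print(f"Average utilization of a peak-sized system:        {avg_util:.0%}")
```

The point isn't the totals--change the assumed rates and the conventional purchase can come out ahead--it's that the metered bill rises and falls with usage, which is exactly the risk transfer the lease-or-buy decision turns on.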
So far, we've been talking about high-end systems--high-value, unique resources, such as very large database servers or large centralized storage systems. These are high-margin items for vendors, so doing some creative financing to retain existing business and win new accounts is all part of the game. If that were all there was to utility computing, it would hardly be the next big thing. In fact, we think that in the short run, this pricing-game version of utility, or on-demand, computing is a game only large IT organizations should be playing.
What's interesting for a wider range of companies is the effort to create hardware, software and management tools that let IT assets be allocated and reallocated quickly, efficiently and, eventually, automatically as need arises. This is the holy grail of utility computing and an area in which every vendor claims to play. But as with any new technology, the bigger the vision, the less compelling and immediately useful early offerings are. It's going to take three years or more to rearchitect software systems to fully incorporate the promise of utility computing.
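What would that look like in practice? As a sketch only--the function, pool size and load figures below are our invention, not any shipping product's API--here is the core idea reduced to a few lines of Python: a shared pool of interchangeable servers, recarved automatically as application load shifts:

```python
# A deliberately simplified illustration of the utility-computing ideal:
# a shared pool of servers reallocated automatically as demand shifts.
# Nothing here reflects a real vendor's tool; it's the concept in miniature.

POOL_SIZE = 16  # total interchangeable servers in the shared pool

def allocate(pool_size: int, loads: dict[str, float]) -> dict[str, int]:
    """Split the pool among applications in proportion to current load,
    guaranteeing each running application at least one server."""
    total = sum(loads.values())
    shares = {app: max(1, round(pool_size * load / total))
              for app, load in loads.items()}
    # Trim any rounding overshoot from the most-provisioned apps.
    while sum(shares.values()) > pool_size:
        biggest = max(shares, key=shares.get)
        shares[biggest] -= 1
    return shares

# Morning: the web tier is busy, batch jobs are idle.
print(allocate(POOL_SIZE, {"web": 0.70, "database": 0.25, "batch": 0.05}))
# Overnight: batch processing dominates; the same pool is recarved.
print(allocate(POOL_SIZE, {"web": 0.10, "database": 0.20, "batch": 0.70}))
```

Everything this toy glosses over--migrating application state, reconfiguring networks and storage, honoring service-level commitments--is the hard part, and it's why early offerings fall so far short of the vision.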