Data Center Pricing Schedules: Driving Positive Vendor Behavior

One of the most scrutinized and heavily negotiated elements of an IT outsourcing (ITO) agreement for infrastructure is the resource pricing schedule. This forward-pricing matrix specifies data center management costs for billable resource units, such as server instances, database instances, network elements, and storage capacity, over the term of the relationship. Like a tax table or any other financial schedule, the ITO rate structure has a significant impact on the behavior of those governed by it--in this case, the service provider. To encourage efficient asset utilization, thoughtful definition and measurement of billable resource units, along with supporting contractual terms, are critical in fostering positive service provider behavior.

Variable-usage ITO costs are driven by both the unit rates in the data center pricing schedule and actual resource consumption. The most common practice is to negotiate rates based on baseline resource assumptions and then adjust the bill--up or down--each month based on actual consumption. In practice, limits on variance apply, with unit costs increasing at lower volumes and discounts applied at higher volumes. Of course, other charges and credits come into play as well.
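
To make the mechanics concrete, here is a minimal sketch of how a consumption-based monthly charge with variance bands might be computed. The resource units, rates, baselines, and band thresholds are illustrative assumptions, not figures from any actual pricing schedule.

```python
# Illustrative sketch: consumption-based monthly charge with variance bands.
# All resource units, rates, baselines, and band adjustments are hypothetical.

BASE_RATES = {                    # negotiated unit rate at baseline volume
    "server_instance": 750.00,    # $ per managed server instance per month
    "storage_tb": 167.00,         # $ per allocated terabyte per month
}

BASELINES = {                     # baseline volumes assumed in the schedule
    "server_instance": 200,
    "storage_tb": 500,
}

def unit_rate(resource: str, actual_volume: float) -> float:
    """Apply a simple variance band: unit cost rises below baseline volume,
    and a discount applies above it (thresholds are purely illustrative)."""
    rate = BASE_RATES[resource]
    ratio = actual_volume / BASELINES[resource]
    if ratio < 0.9:               # low-volume band: higher unit cost
        return rate * 1.10
    if ratio > 1.1:               # high-volume band: discount applies
        return rate * 0.95
    return rate                   # within the normal variance band

def monthly_charge(usage: dict[str, float]) -> float:
    """Sum adjusted unit rate times actual consumption across resources."""
    return sum(unit_rate(r, v) * v for r, v in usage.items())

if __name__ == "__main__":
    actuals = {"server_instance": 185, "storage_tb": 560}  # one month's usage
    print(f"Invoice total: ${monthly_charge(actuals):,.2f}")
```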

While this pay-per-use model is intuitively appealing, it invites financial creep without diligent asset and vendor management. In fact, mid-term outsourcing relationship audits (for example, in year three of a five- to seven-year contract) have shown that many ITO clients are running 20% to 30% over growth-corrected total spend expectations. The following preventive measures help proactively manage pricing overages and drive positive vendor behavior regarding resource efficiency.

1. Don’t reward resource "sprawl": The obvious downside of variable pricing contracts based on resource units is that they motivate the service provider to add resource units. In a perfect world, resources would be responsibly managed to competitive utilization targets. In practice, however, we regularly see clients paying for idle and underutilized resources. Real-world examples include a client paying for backup resources (automated tape library slots) that were only 9% utilized, a server environment so poorly tuned that it was incurring 30% overhead (net additional hardware), and a number of clients paying for storage capacity running at less than 40% utilization. For outsourcing contracts with client-owned hardware and software, consider the service provider’s dilemma: make unreimbursed investments in technology and tuning to increase efficiency and drive down its own revenue stream, or maintain the status quo and watch revenue grow with increased overhead. Although most contracts include some shared-savings incentives and continuous-improvement responsibilities, the contract language is often not specific enough to enforce.

To effectively manage the disconnect between service provider revenue generation and efficient asset utilization, end users should consider one or more of the following approaches to get the client-vendor relationship on track from the start:

a.) Define billable resource units to align with real usage, not raw capacity.
In this approach, resource units are defined to reflect consumption, not capacity. As an example, the vendor charges only for allocated storage (the storage a server can "see" from its I/O interfaces), not the available capacity of the installed storage array. While some per-unit rates may be higher, the trade-off is well worth it if rates are scaled correctly. For example, if a service provider is prepared to charge $100/terabyte/month for installed capacity and assumes it can achieve an effective yield of 60% for client-usable storage, it will seek a "usable capacity" rate of $100 / 60% ≈ $167/terabyte/month. Going forward, the vendor will have to manage to 60% utilization to maintain its initial profit margins. Were effective utilization to drop over time due to poor management, the cost would fall to the vendor, not the client. The logic behind this storage example extends to server utilization (through virtualization and standardization), network resource utilization, and so on.
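
A minimal sketch of the rate conversion in this example follows; the installed-capacity rate and 60% yield come from the figures above, while the function and variable names are illustrative only.

```python
# Convert an installed-capacity storage rate to a usable-capacity rate,
# given the effective yield the provider expects (sketch of the example above).

def usable_capacity_rate(installed_rate: float, expected_yield: float) -> float:
    """Rate per usable terabyte needed to preserve the provider's margin.

    installed_rate: $/TB/month quoted against installed (raw) capacity.
    expected_yield: fraction of installed capacity usable by the client.
    """
    return installed_rate / expected_yield

# Figures from the example: $100/TB/month installed, 60% effective yield.
print(f"${usable_capacity_rate(100.0, 0.60):.2f}/TB/month")  # ~$166.67, i.e. ~$167
```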

b.) Define minimum utilization standards for all resource categories.
Here the contract contains specific resource utilization and efficiency metrics, with penalties for underutilization or billing relief for missed targets. Be precise in defining not only the utilization targets themselves, but also the tools and methods used to measure them. Measure on a regular basis--monthly is ideal.
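
As one illustration of how such a monthly review might be automated, the sketch below compares reported utilization against contractual minimums and flags categories eligible for billing relief. The target percentages and relief rate are assumptions for illustration, not values from any real contract.

```python
# Illustrative monthly utilization review against contractual minimums.
# Targets, measurements, and the relief percentage are hypothetical.

UTILIZATION_TARGETS = {   # minimum acceptable utilization per resource category
    "storage": 0.60,
    "server": 0.55,
    "tape_library": 0.50,
}
RELIEF_RATE = 0.05        # assumed billing relief per missed target (5%)

def review_month(measured: dict[str, float]) -> dict[str, float]:
    """Return the billing-relief fraction owed for each under-target category."""
    relief = {}
    for category, target in UTILIZATION_TARGETS.items():
        actual = measured.get(category)
        if actual is not None and actual < target:
            relief[category] = RELIEF_RATE
    return relief

if __name__ == "__main__":
    # Example month: storage and tape-library utilization fall below target.
    print(review_month({"storage": 0.38, "server": 0.61, "tape_library": 0.09}))
    # -> {'storage': 0.05, 'tape_library': 0.05}
```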

c.) Retain architectural control and ownership of the resource definition and placement process.
In this simple and direct approach, additions to resource pools are at the client’s discretion. The statement of work (that is, the description of services) contract exhibit specifies that retained client technical staff have the final word on when new resources are created, how they are defined, and where they are placed. Be aware, however, that when the service provider does not control resource utilization and placement, it may limit its accountability for service levels around performance and, to a lesser extent, availability.

