Network Computing Contributing Editor Kurt Marko came out with a contrarian view during yesterday's "Data Centers: The Next 12-18 Months" panel at Interop: The data center you can build won't be as feature-rich as a co-lo facility that can focus on bringing in sufficient power, cooling and networking, and on ensuring that it's all redundant and fault tolerant.
You can expand and contract at a co-lo facility far more efficiently than you can when you own the data center, Marko explained. Besides, the cloud gold rush is causing lots of facilities to be built before demand emerges. That means there may be lots of cheap space in the next few years.
IT was talking about co-lo 20 years ago or more. At the time, I don't think the remote management components were as sophisticated as they are today, which made on-site visits much more likely. But with the ability to manage servers at the hardware level using out-of-band management controllers such as HP's iLO and Dell's DRAC, on up to remote desktop at the OS, there's little need to actually touch the hardware once it's racked, cabled and powered on. With a little ingenuity, you could actually work around hardware failures until you can get someone on site.
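As a sketch of what that kind of lights-out management looks like in practice, here's a minimal Python wrapper that builds ipmitool commands for remote power control over IPMI. The host address and credentials are placeholders, and this assumes your management controller (iLO, DRAC or otherwise) has its IPMI-over-LAN interface enabled:

```python
import subprocess

VALID_ACTIONS = {"status", "on", "off", "cycle", "reset"}

def ipmi_power_cmd(host, user, password, action):
    """Build an ipmitool command line for a chassis power action."""
    if action not in VALID_ACTIONS:
        raise ValueError(f"unsupported power action: {action}")
    return [
        "ipmitool", "-I", "lanplus",   # IPMI over LAN to the BMC
        "-H", host, "-U", user, "-P", password,
        "chassis", "power", action,
    ]

def power_cycle(host, user, password):
    # Actually invokes ipmitool; requires it installed and a reachable BMC.
    return subprocess.run(
        ipmi_power_cmd(host, user, password, "cycle"),
        capture_output=True, text=True,
    )
```

So a hung box at a co-lo facility becomes `power_cycle("10.0.0.5", "admin", "secret")` from your desk (hypothetical address and credentials) rather than a drive across town.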
One of Marko's points is that the increasing density of computing (all of it--servers, storage, networking, and so on) leads to increased power and heating requirements. You probably can't retrofit enough cooling--including getting the cold air where it's needed and removing the hot air--into an existing raised-floor data center. You can augment your CRAC systems with in-row or even in-rack cooling, but it's more expensive and really just stretches the problem out. In fact, one of the most economical things you can do in your existing data center is enforce hot- and cold-aisle separation using curtains at the end of the row. Just make sure your racked equipment is oriented consistently, with intakes drawing from the cold aisle and exhaust venting into the hot aisle.
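To put numbers on the cooling problem: every watt a rack draws eventually has to be removed as heat, and the conversion is fixed--roughly 3.412 BTU/hr per watt, with cooling capacity commonly rated in tons (12,000 BTU/hr per ton). A quick back-of-the-envelope calculation, using illustrative rack wattages rather than figures from the panel:

```python
BTU_PER_WATT_HOUR = 3.412   # 1 W of IT load ~ 3.412 BTU/hr of heat
BTU_PER_TON = 12_000        # 1 ton of cooling = 12,000 BTU/hr

def cooling_load(watts):
    """Return (BTU/hr, tons of cooling) needed to remove `watts` of heat."""
    btu_hr = watts * BTU_PER_WATT_HOUR
    return btu_hr, btu_hr / BTU_PER_TON

# A legacy 3 kW rack vs. a dense 10 kW rack (illustrative figures):
for rack_watts in (3_000, 10_000):
    btu_hr, tons = cooling_load(rack_watts)
    print(f"{rack_watts / 1000:.0f} kW rack -> {btu_hr:,.0f} BTU/hr "
          f"({tons:.2f} tons of cooling)")
```

More than tripling rack density more than triples the cooling tonnage you have to deliver to that row, which is exactly the retrofit problem a raised-floor room runs into.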
There's power to manage. Even though power consumption may be going down on a per-unit (core, port, drive, and so on) basis, the fact is you're likely expanding your equipment at a faster rate than you're seeing savings from more efficient hardware. You still end up paying for more power and more cooling. There haven't been many advances in more efficient power distribution, other than going all DC, which saves the loss incurred by an AC/DC conversion as well as the transmission loss with AC current. If your area suffers a power outage, how long will your data center last? How much battery backup do you have? Larger data centers use multiple power strategies, from batteries to on-site generators, to keep the equipment running. Since power management is critical to operations at a co-lo facility, which operates at a larger scale than you do, chances are it will have more reliable power management.
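The point about efficiency gains being swamped by growth is easy to see with a compounding sketch. The 10% per-unit improvement and 25% annual fleet growth below are illustrative assumptions, not numbers from the panel:

```python
def fleet_power(initial_kw, unit_growth, efficiency_gain, years):
    """Total draw after compounding fleet growth against per-unit savings."""
    power = initial_kw
    for _ in range(years):
        # Each year: more units, but each unit a bit more efficient.
        power *= (1 + unit_growth) * (1 - efficiency_gain)
    return power

# Start at 100 kW; units grow 25%/yr while each gets 10% more efficient.
for year in range(1, 4):
    kw = fleet_power(100, 0.25, 0.10, year)
    print(f"year {year}: {kw:.1f} kW")
```

Even with hardware getting 10% more efficient every year, total draw still compounds at 12.5% annually under these assumptions--which is why you keep paying for more power and more cooling.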
There's also the networking angle: How much will it cost you to have a 1-Gbit, multiple-Gbit or even 10-Gbit WAN link pulled to your site? A lot, if you can even get one of those high-speed last-mile links from your service provider. Midsize to large co-los already have capacity built in and can add more at less cost than your average IT shop. The really large co-los have multiple service providers and can get service turned up quickly--certainly more quickly than what your provider will do for you.
The hard numbers may not show much--or any--benefit if you move to a co-lo facility, but the benefits of better power, cooling and networking may offset the costs compared with doing it yourself. A comment that Sam Barnett, directing analyst, data center and cloud, at Infonetics Research, made last night at dinner caught my attention: The big service providers are building out cloud facilities in anticipation of demand. He said he sees similarities to the WAN build-outs of the late '90s and early 2000s, which left miles of fiber run but unlit. If you wait a few years, co-lo space may come down in cost if the anticipated cloud demand doesn't materialize.