The Inevitability of Private Cloud

Private cloud may be inevitable, but diving into cloud without a plan is a terrible idea. And even with a technology plan, there are also two massive cultural problems to overcome: the buy-big syndrome and the belief that servers are not software.

Jonathan Feldman

February 15, 2012


Back in August 2011, I wrote that the implementation risk of private cloud for enterprises outweighs the benefits. I still think this is true if an organization rushes into private cloud. But fast-forward to today, and I'm starting to believe that there's a certain inevitability to private cloud infrastructure as a service (IaaS), and that, when planned for and executed well, it will serve enterprise organizations that choose to add capital equipment better than simply adding to their buckets of virtualization would.

Enterprise private cloud infrastructure is inevitable. I never thought I'd be writing that statement, but think about it. Virtualization by itself is so very 20th century. I was getting into email flame wars about whether virtualization was a fad back in the '90s! (I still have an email from a well-known analyst who insisted that virtualization would never be used in disaster recovery scenarios. We all know how that one turned out.)

Why is private cloud infrastructure inevitable for today's enterprise IT shops? First, it's where the puck is going, not where the puck was. Successful, innovative companies like Amazon and Zynga have clearly proven that this is the future of computing. Second, the benefits are all there. The agility that private IaaS offers is unparalleled, and while it does introduce complexity, it also introduces abstraction and management that help deal with that complexity. Some organizations are going whole hog into public cloud, but the largest organizations are almost certainly going to want some "owned" infrastructure, if only to cut costs. Based on my experience pricing non-burstable (always-on) infrastructure against public cloud, the ROI of private cloud makes sense in some instances.

There are tangible benefits to having "servers as software." What do I mean by that? Servers should be destructible and rebuildable via automation, not by some dude who is bringing up a virtual machine and clicking next-next-next. At a talk that I gave last year, I did a demo to show how quickly I could bring up a fully configured cloud server, versus how fast someone in the audience could bring up a virtual machine. In an irony not lost on me, the person who was supposed to bring the media and license keys for the demo did not, and the install-to-virtual demo was a flop. But this experience actually made the point: Virtualization without cloud automation must rely on humans, who aren't all that reliable. Yes, I know that appliances are available, but most of them still need some configuration to work in your environment. A scripted cloud instance mostly does not.
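
To make the "servers as software" point concrete, here's a minimal sketch of bringing up a fully configured instance through an API call instead of a console wizard. It happens to use Apache Libcloud talking to a CloudStack-style endpoint; the host, credentials, template and offering names are hypothetical placeholders, and your own private cloud's driver and parameters will differ.

```python
# A minimal sketch: provision a server via API instead of clicking next-next-next.
# Assumes Apache Libcloud and a CloudStack-style private cloud; the host, credentials,
# and template/offering names are hypothetical placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

CloudStack = get_driver(Provider.CLOUDSTACK)
driver = CloudStack(key='API_KEY', secret='SECRET_KEY',
                    host='cloud.example.internal', path='/client/api')

# Pick a pre-baked template and a service offering by name.
image = next(i for i in driver.list_images() if i.name == 'centos-6-report-template')
size = next(s for s in driver.list_sizes() if s.name == 'Medium Instance')

# One call replaces the manual install; the template carries the configuration.
node = driver.create_node(name='report-batch-01', image=image, size=size)
print('Provisioned %s (%s)' % (node.name, node.id))
```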

Server destructibility leads to the Two Big Benefits of enterprise private cloud infrastructure. The first is better use of resources. You do tend to invest more in storage with cloud infrastructure (it's not uncommon to configure servers with mirrored storage so you can squeeze as many reads out of it as possible, to make up for the delays that storage abstraction introduces), but the destructibility of servers makes folks think differently about, say, running batch servers 24/7. At my organization, for example, we run some custom weekly reports on a stand-alone server. These, and the other reports we run on it, bog the machine down so badly that they really need their own box, yet that server is in use perhaps 20 hours a week. Yes, you can script a hypervisor to bring machines up, but wouldn't it be better to destroy the server (and release all of that storage) when it's not in use? Of course.
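
As a rough sketch of what that weekly cycle might look like, continuing the hypothetical Libcloud setup above: build the report server, run the jobs, then destroy the node so its compute and storage go back into the pool.

```python
# Sketch of the weekly batch cycle: build the report server, use it, tear it down.
# Continues the hypothetical Libcloud/CloudStack setup sketched above.

def weekly_report_run(driver, image, size):
    """Create the batch server, run the reports, then destroy it to release storage."""
    node = driver.create_node(name='report-batch-01', image=image, size=size)
    try:
        # ... kick off the ~20 hours of report jobs against `node` here
        #     (ssh, a job scheduler, whatever your shop uses) ...
        pass
    finally:
        # Destroying the node frees its compute and storage for the rest of the week.
        driver.destroy_node(node)
```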

Second, there is nothing that I or other CIOs would love to see more than the death of the "fragile artifact" in the enterprise. You know what I mean--the server that only Jonathan is familiar with. The server that, if it fails, nobody but Jonathan knows how to rebuild. That server, and others like it in your infrastructure, are ticking time bombs. One day they're going to go belly up, and Jonathan is going to be on vacation or otherwise occupied. Cloud infrastructure, when deployed right, makes it so that there are no more fragile artifacts or non-repeatable builds.
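
Here's what "no fragile artifacts" might look like in practice, again as a hedged sketch built on the hypothetical Libcloud setup above: the server's build definition lives in version control, and anyone (or a scheduled job) can rebuild the box from that definition rather than from Jonathan's memory.

```python
# Sketch: the server's build lives in version control, not in Jonathan's head.
# Names are hypothetical and continue the Libcloud/CloudStack setup above.

REPORT_SERVER = {
    'name': 'report-batch-01',
    'template': 'centos-6-report-template',   # pre-baked image with the app installed
    'offering': 'Medium Instance',
}

def rebuild(driver, spec):
    """Recreate a server from its checked-in definition; no tribal knowledge required."""
    image = next(i for i in driver.list_images() if i.name == spec['template'])
    size = next(s for s in driver.list_sizes() if s.name == spec['offering'])
    return driver.create_node(name=spec['name'], image=image, size=size)
```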

Still, there are real-world problems associated with moving to private cloud. As we used to see with SAN storage, there are still vendors stuck in the stone age that won't support a given platform on a given hypervisor. So if you decide to use CloudStack, which doesn't support Hyper-V, you'd best invest in VMware, because the chances of your legacy Win32 app vendor supporting Xen or KVM are pretty low.

Of course, back when I was a data center manager, we used to have enterprise app vendors refuse to support us because "the app is running on SAN storage." One of my sysadmins then lied to the vendor and told them, "OK, it's back on physical storage." Then the vendor fixed the app problem, oblivious to the fact that it was still running on a SAN. I'm not condoning this behavior, but I am saying that with time, vendors will stop saying dumb things like, "Only supported on the following hypervisor ..."

Private cloud may be inevitable, but diving into cloud without a plan, as our latest InformationWeek report details, is a terrible idea. And even with a technology plan, there are two massive cultural problems to overcome in today's happy-as-a-clam, big-bucket-o'-virtualization IT shops.

First is the buy-big syndrome. Enterprise IT has a long organizational memory of how terrible it is to use generic hardware; people have been burned in the past by white-box machines. So the notion that cloud's secret sauce is in the software, not the hardware, is a difficult one. Everybody still wants to buy big. Can you say "EMC, VMware and Cisco"? And, indeed, Vblocks have made significant inroads with those that service enterprise IT (like CSC), but if you were to graph Amazon's or Google's cloud footprint against all of the Vblock footprints combined, I'm not sure you would even see the Vblocks.

The point is, it is completely unnecessary to buy big on hardware for cloud computing. The resiliency is in the software, not the hardware. This is very tough for current enterprise IT pros--with Dell, EMC, IBM and Cisco in their data centers--to understand. News flash: Hardware doesn't matter all that much if the software plans for failure and enables resiliency. And the ROI equation looks much different once name-brand hardware is involved.

Second is the belief that servers are not software. Cloud means that servers are software. Infrastructure folks reluctantly accepted virtualization, though I still remember my staff scornfully saying, "Uh, Jonathan, you realize that virtualization means more than one server is running on the hardware!" That same crowd completely disbelieves that it's possible to destroy and construct servers using code: "Uh, Jonathan, you realize that deleting an instance means that you're destroying the root volume of the server, right?"

There is more faith in manual process among the enterprise infrastructure crowd than you'd like to think. I am pretty sure that the right thing to do here is to turn cloud infrastructure over to the developers at your organization, bypassing the infrastructure crowd; developers totally understand automation and building things with code. The dev-ops proposition, however, is fraught with its own cultural clashes.

But if you work at an enterprise, it's time to start thinking about how to overcome these obstacles. Sure, for small organizations, public cloud will likely constitute 95% of operations. But for large organizations that covetously guard the family jewels, private cloud's promise of no fragile artifacts and better use of resources will turn whatever data centers are left into much more efficient operations. You just need to figure out how to modernize your staff's belief systems.

About the Author

Jonathan Feldman

CIO, City of Asheville, NC

Jonathan Feldman is Chief Information Officer for the City of Asheville, North Carolina, where his business background and work as an InformationWeek columnist have helped him to innovate in government through better practices in business technology, process, and human resources management. Asheville is a rapidly growing and popular city; it has been named a Fodor top travel destination, and is the site of many new breweries, including New Belgium's east coast expansion. During Jonathan's leadership, the City has been recognized nationally and internationally (including the International Economic Development Council New Media, Government Innovation Grant, and the GMIS Best Practices awards) for improving services to citizens and reducing expenses through new practices and technology.  He is active in the IT, startup and open data communities, was named a "Top 100 CIO to follow" by the Huffington Post, and is a co-author of Code For America's book, Beyond Transparency. Learn more about Jonathan at Feldman.org.
