Of course, back when I was a data center manager, we used to have enterprise app vendors refuse to support us because "the app is running on SAN storage." One of my sysadmins then lied to the vendor and told them, "OK, it's back on physical storage." Then the vendor fixed the app problem, oblivious to the fact that it was still running on a SAN. I'm not condoning this behavior, but I am saying that with time, vendors will stop saying dumb things like, "Only supported on the following hypervisor ..."
Private cloud may be inevitable, but diving into cloud without a plan, as our latest InformationWeek report details, is a terrible idea. And even with a technology plan, there are two massive cultural problems to overcome in today's happy-as-a-clam, big-bucket-o'-virtualization IT shops.
First is the buy-big syndrome. Enterprise IT has significant organizational memory about how terrible it is to use generic hardware. Shops have been burned in the past by white-box machines. So the notion that cloud's secret sauce is in the software, not the hardware, is a difficult one. Everybody still wants to buy big. Can you say "EMC, VMware and Cisco"? And, indeed, Vblock has made significant inroads with those that service enterprise IT (like CSC), but if you were to graph Amazon's or Google's cloud footprint against all of the Vblock footprints combined, I'm not sure you'd even see the Vblocks.
The point is, it's completely unnecessary, when it comes to cloud computing, to buy big hardware. The resiliency is in the software, not the hardware. This is very tough for current enterprise IT pros--with Dell, EMC, IBM and Cisco in their data centers--to accept. News flash: Hardware doesn't matter all that much if the software plans for failure and enables resiliency. And the ROI equation looks much different when name-brand hardware is involved.
Second is the belief that servers are not software. Cloud means that servers are software. Infrastructure folks reluctantly accepted virtualization; I still remember my staff scornfully saying, "Uh, Jonathan, you realize that virtualization means more than one server is running on the hardware!" That same crowd also completely disbelieves that it's possible to destroy and construct servers using code. "Uh, Jonathan, you realize that deleting an instance means that you're destroying the root volume of the server, right?"
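To make the "servers are software" point concrete, here's a minimal sketch of treating the server lifecycle as code. It uses a toy in-memory "cloud" in place of a real provider SDK; the class and method names are illustrative assumptions, not any vendor's actual API, but with a real provider the same create/inspect/destroy calls would simply go over its API instead.

```python
import uuid


class ToyCloud:
    """Toy in-memory stand-in for a cloud provider API.

    Illustrative only -- a real provider would be driven through its SDK,
    but the lifecycle (create, inspect, terminate) is the same idea.
    """

    def __init__(self):
        self.instances = {}  # instance id -> instance record

    def create_instance(self, image):
        """'Construct' a server from code: no ticket, no rack work."""
        instance_id = "i-" + uuid.uuid4().hex[:8]
        self.instances[instance_id] = {
            "image": image,
            "root_volume": "vol-" + uuid.uuid4().hex[:8],
            "state": "running",
        }
        return instance_id

    def terminate_instance(self, instance_id):
        """'Destroy' a server -- and yes, its root volume goes with it."""
        del self.instances[instance_id]


cloud = ToyCloud()
server = cloud.create_instance(image="base-linux")
print(server in cloud.instances)   # True: the server exists
cloud.terminate_instance(server)
print(server in cloud.instances)   # False: server and root volume are gone
```

The point of the sketch is that the server is just a record plus two function calls. Once the infrastructure crowd accepts that, "deleting the root volume" stops being scary and starts being routine.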
There is more belief in manual process among the enterprise infrastructure crowd than you'd like to think. I'm pretty sure the right thing to do here is to turn cloud infrastructure over to your organization's developers, bypassing the infrastructure crowd entirely; they already understand automation and building things with code. The dev-ops proposition, however, is fraught with its own cultural clashes.
But if you work at an enterprise, it's time to start thinking about how to overcome these obstacles. Sure, for small organizations, it's likely that public cloud will constitute 95% of operations. But for large organizations that covetously guard the family jewels, private cloud's promise of no fragile artifacts and better use of resources will turn what data centers are left into much more efficient operations. You just need to figure out how to modernize your staff's belief systems.

Jonathan Feldman is Chief Information Officer for the City of Asheville, North Carolina, where his business background and work as an InformationWeek columnist have helped him to innovate in government through better practices in business technology, process, and human ...