
Systems Management Dilemma: How Much Is Too Much?

VMware's big-data approach--putting sophisticated analytic software at the top of the performance-data feed chain so something other than a human can crunch the endless stream of numbers--does offer a way to include all the relevant data in a decision, rather than just the data one admin can comfortably absorb.

It also offers tools that hunt for performance trends or reconfiguration options most admins don't have the time or energy to find on their own.
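
To make that idea concrete, here's a minimal sketch--my own illustration, not anything VMware ships--of what letting software crunch the numbers looks like at its simplest: a rolling-statistics filter in Python that reads every metric sample and surfaces only the handful a human should ever see.

```python
from collections import deque
from statistics import mean, stdev
import random

def flag_anomalies(samples, window=60, threshold=3.0):
    """Yield (index, value) for samples that stray from the rolling
    baseline by more than `threshold` standard deviations.

    A toy stand-in for the analytics layer: it reads every data point
    so the admin only ever sees the handful that matter.
    """
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        history.append(value)

# 500 unremarkable CPU readings with two injected spikes.
random.seed(1)
feed = [random.gauss(40, 2) for _ in range(500)]
feed[200], feed[350] = 95.0, 3.0
for idx, val in flag_anomalies(feed):
    print(f"sample {idx}: {val:.1f} -- worth a human's attention")
```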

If you're overwhelmed by the volume of information your IT infrastructure is sending, what better solution than the same kind of analytics being developed to spy out unrealized buying habits among a company's customers, or hidden weaknesses of character that can be exploited to pull in a few more dollars?

That, ultimately, is what VMware is after, or at least the function it wants to support. According to VMware's strategy announcements at VMworld in San Francisco last week, its big-data acquisitions do give it the tools to sift through piles of generic system updates, but their main purpose is to give customers a more complete, more accurate and more insightful view of the IT infrastructure--and of the tremendous potential in its ability to be reconfigured on the fly.

VMware doesn't just want to offer a clearer picture of a company's network of VMs; it wants to offer a clear picture of the whole infrastructure, along with tools that let customers define their own clusters, data centers, storage, networking and other resources with even more abandon than cloud and virtualization technologies already promise.

VMware's acquisition of Nicira gives it tools to virtualize physical networks and reconfigure them on the fly.

Its acquisition of DynamicOps gives it the tools to let customers manage and reconfigure multiple clouds, virtual networks and virtual data centers from a central management station.

VMware wants big-data analytics to improve the way customers absorb, filter and act on performance data from the products it already supplies. Its ambition is much greater than that, though.

The need for a more efficient management infrastructure isn't lost on Allwyn Sequeira, VMware's current CTO, who posts nifty architectural diagrams that are light on detail, but also blogs about how software-defined data centers, virtual networks and virtual infrastructures can be fitted together most efficiently.

Sequeira sees the need not only for lots of automation, but also for more abstraction of network ports, switches, firewalls and other resources; for the ability to add new services to existing networks or data centers; and for the ability to pool real or virtualized resources and assign them where they can do the most good.
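
As a rough illustration of that last point--the names and the best-fit policy here are my own invention, not anything from VMware or Sequeira--pooling and assignment can be boiled down to a few lines of Python:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    kind: str                      # e.g. "port", "switch", "firewall"
    capacity: int                  # abstract units of work it can absorb
    assigned_to: str | None = None

@dataclass
class ResourcePool:
    """Pools real or virtualized resources and hands them out on demand."""
    free: list[Resource] = field(default_factory=list)

    def add(self, resource: Resource) -> None:
        self.free.append(resource)

    def assign(self, kind: str, needed: int, tenant: str) -> Resource:
        # Best fit: the smallest free resource that covers the request,
        # so big resources stay available for big requests.
        candidates = sorted(
            (r for r in self.free if r.kind == kind and r.capacity >= needed),
            key=lambda r: r.capacity,
        )
        if not candidates:
            raise LookupError(f"no free {kind} with capacity >= {needed}")
        choice = candidates[0]
        self.free.remove(choice)
        choice.assigned_to = tenant
        return choice

pool = ResourcePool()
pool.add(Resource("fw-1", "firewall", capacity=10))
pool.add(Resource("fw-2", "firewall", capacity=4))
print(pool.assign("firewall", needed=3, tenant="web-cluster"))  # picks fw-2
```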

Self-Healing Infrastructures vs. Big Data?

Kliger makes a lot of sense when it comes to self-healing infrastructures and management systems that don't just run to a human with every minor problem. However, I don't see how VMware or anyone else can build the kind of complex, dynamic IT infrastructure it's talking about without having to process so much performance data that the result is a big-data problem, whether it should be one or not.

On the other hand, I also don't see how it would be possible to run that kind of complicated mesh of rapidly changing nodes, networks and applications without being able to automate some pretty sophisticated provisioning and control steps.
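
As a thought experiment--again, my sketch rather than any vendor's actual product--the self-healing idea reduces to an escalation ladder: try the cheap automated remediations first, and bother a human only when they all fail.

```python
def restart_service(node):   # cheapest fix: bounce the offending service
    print(f"restarting service on {node}")
    return False             # pretend it didn't clear the alert

def migrate_vms(node):       # costlier fix: move workloads elsewhere
    print(f"migrating VMs off {node}")
    return True              # pretend this one worked

def page_human(node):        # last resort
    print(f"paging the on-call admin about {node}")
    return True

ESCALATION_LADDER = [restart_service, migrate_vms, page_human]

def self_heal(node):
    """Try each remediation in order of cost; stop at the first success."""
    for fix in ESCALATION_LADDER:
        if fix(node):
            return fix.__name__
    return "unresolved"

print(self_heal("esx-host-17"))  # restarts, then migrates; never pages
```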

I typically hate news stories and analyst reports whose conclusion is that such-and-such a phenomenon is significant but mysterious, and that it will be interesting to see how things play out.

In this case, both big-data analytics and the whole mesh of factors underlying the still-nascent idea of the software-defined network (let alone the software-defined data center) are still too immature and incomplete to allow for much accurate projection at all.

If I had to bet, I'd say that in three years the standard virtualized enterprise will use a management infrastructure with troubleshooting and problem-solving software built in at several different points in the organizational hierarchy.

It will almost certainly also have the ability to capture, store and analyze real-time performance data from a central point at the very top of the hierarchy to give CIOs and senior IT managers some kind of coherent picture of what's going on with their far-flung data-processing empires.

It's a wishy-washy conclusion, for which I apologize. It's always more satisfying to be able to say one side is right and the other wrong in any dispute.

Unfortunately, in politics as in technology, science, medicine or religion, even the simplest accurate answer to the most complex questions depends on layers of complexity that have to be shifted and reconfigured before a simple answer is possible, let alone practical.

That may not be true in other hot technology arenas, but in virtualized computing and big-data analysis, we're still at too early a stage to guess accurately how the two will come together, or how close to the ultimate goal even the best combination of the two will get us.