
Interop Data Center Chair Jim Metzler On Networking

NWC: Where do you see things like UC, streaming video, and all of that coming into the discussion?
Metzler: It does and it doesn't. There's no question those are important applications, but in terms of the architecture of the data center, I don't see them as a driving force, or as any more compelling than the argument that we have to flatten the data center. What you will get with those applications is an argument for services, for QoS and security. More streaming media in general is an argument for more QoS in the WAN, and IT shops are starting to be more receptive to that. Certainly one more factor is filling up the pipes: there's always a push for higher capacity, certainly 10 Gigabit in the core. So it comes down to those architectural things: the speed of the pipe, the need for better QoS, and, in my wildest fantasies, better end-to-end management so that people have an idea of what's going on.
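
[Editor's note: marking traffic is the application-side half of that WAN QoS argument. A minimal sketch in Python, assuming a Linux host where socket.IP_TOS is available; the address, port, and payload are placeholders, and the marking only helps where the WAN actually honors DSCP.]

```python
import socket

# DSCP code points (RFC 4594): EF for real-time voice, AF41 for streaming video.
# The IP TOS byte carries DSCP in its top six bits, hence the << 2 shift.
DSCP_EF = 46
DSCP_AF41 = 34

def open_marked_socket(dscp: int) -> socket.socket:
    """Open a UDP socket whose packets carry the given DSCP marking,
    so WAN routers can queue streaming/UC traffic ahead of bulk data."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock

# Example: mark a media stream as AF41 before sending it across the WAN.
media_sock = open_marked_socket(DSCP_AF41)
media_sock.sendto(b"rtp-payload...", ("198.51.100.7", 5004))
```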

NWC: So in your wildest fantasies, what does better end-to-end management look like?
Metzler: Let's just say that in today's traditional environment, with applications on physical servers, if an application is degrading, it's almost always noticed first by the end user, not by IT. That's been the case for a number of years, and it's still the case today, in that relatively simple environment. Let's go through this in steps:

In that environment, you already have a problem. In the environment we're moving to, with more virtualization, say all the servers are virtualized, possibly with multiple hypervisor vendors. Now, when the application is degrading, you can't troubleshoot in the traditional way. You need the ability to determine not only that the application is degrading, but whether the root cause is how the VMs are behaving, one of them consuming CPU or whatnot, which is why the part of the application that VM provides is suffering. So you need to dig deep inside the physical server and have all of that data on a per-VM basis, and that can be a challenge.
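
[Editor's note: a minimal sketch of that per-VM visibility on the host side, assuming a Linux/KVM setup where each guest runs as a qemu process. The psutil polling and the parsing of qemu's "-name" flag are illustrative, not Metzler's method.]

```python
import time
import psutil

def vm_cpu_snapshot(poll_seconds: float = 2.0) -> dict:
    """Return CPU% per VM on a KVM host, assuming each guest is a qemu process."""
    vms = [p for p in psutil.process_iter(["name", "cmdline"])
           if p.info["name"] and "qemu" in p.info["name"]]
    for p in vms:
        p.cpu_percent(None)            # prime the per-process counter
    time.sleep(poll_seconds)           # measure over a short window
    stats = {}
    for p in vms:
        try:
            # qemu typically carries the guest name as "-name <guest>" on its cmdline
            cmd = p.info["cmdline"] or []
            guest = cmd[cmd.index("-name") + 1] if "-name" in cmd else f"pid-{p.pid}"
            stats[guest] = p.cpu_percent(None)
        except (psutil.NoSuchProcess, ValueError, IndexError):
            continue                   # VM shut down mid-poll; skip it
    return stats

# A VM "sucking CPU power" shows up here before the app-level symptoms do.
print(vm_cpu_snapshot())
```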

Continue out to the cloud, to a hybrid cloud example where that Web tier is being hosted by a cloud computing service, probably in several places. So you now have to gather data from branch offices, from wired and wireless LANs, maybe from the MPLS vendor that connects the branches to other facilities; the Web tier is running on virtualized servers in two or three of a cloud provider's data centers, the enterprise is hosting the application and database servers itself, and you need to pull all of that together. That's not going to happen this year. Or next year. It's an order of magnitude more challenging than the environment we're in today, and we don't do a good job of troubleshooting what we're doing today. And it's not that weird an example.
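
[Editor's note: no end-to-end tooling like this exists today, but the shape of the problem is clear enough to sketch. A hypothetical Python fragment that stitches per-domain latency samples into one view; every domain name, tier label, and number here is invented for illustration.]

```python
from dataclasses import dataclass

@dataclass
class TierSample:
    domain: str        # who owns the measurement: "branch", "mpls", "cloud", "enterprise"
    tier: str          # e.g. "lan", "wan", "web", "db"
    latency_ms: float  # current observation from that domain's tooling
    baseline_ms: float # that tier's own normal

def worst_offender(samples: list) -> TierSample:
    """Rank each tier by drift from its own baseline, so a cross-domain
    slowdown points at one place to start digging."""
    return max(samples, key=lambda s: s.latency_ms / s.baseline_ms)

# Invented numbers: each source (branch probe, MPLS provider report,
# cloud provider API, enterprise tooling) would feed in its own samples.
samples = [
    TierSample("branch",     "lan", 2.0,   2.0),
    TierSample("mpls",       "wan", 45.0,  30.0),
    TierSample("cloud",      "web", 310.0, 80.0),
    TierSample("enterprise", "db",  12.0,  10.0),
]
print(worst_offender(samples))  # -> the cloud-hosted web tier, ~3.9x its baseline
```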

NWC: No, it's not. We've often discussed on NWC how virtualization adds a whole new wrinkle.
Metzler: It does. With cloud, you have multiple organizational domains; within those, multiple technologies; and even within the server domain, a level of information that was never exposed before. I'm doing another Interop session, on how to manage public cloud computing services, because in my mind this is a very key issue. The road to cloud is a wonderful road, and the pundits are always happy and singing Kumbaya, but if we say, yes, you get the advantages of cloud, but the price is that you really can't manage this thing and you have to hope it works as well as it did in the sandbox, that's really not a very good plan.
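
[Editor's note: when the provider exposes little internal telemetry, about the only handle an IT shop has is outside-in measurement. A minimal, hypothetical Python probe of a cloud-hosted endpoint; the URL and the 500 ms budget are placeholders, not a recommended SLA.]

```python
import time
import urllib.request

# Hypothetical cloud-hosted endpoint and latency budget.
ENDPOINT = "https://app.example.com/health"
BUDGET_MS = 500.0

def probe(url: str):
    """Fetch the endpoint once, returning (HTTP status, latency in ms)."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        status = resp.status
    return status, (time.monotonic() - start) * 1000.0

status, latency_ms = probe(ENDPOINT)
if status != 200 or latency_ms > BUDGET_MS:
    # In practice this would open a ticket before end users start calling.
    print(f"degraded: status={status}, latency={latency_ms:.0f} ms")
```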