Network Computing is part of the Informa Tech Division of Informa PLC


Infoblox Opines On Infrastructure 2.0

The evolution of the data center and of the enterprise network will be the hot button issues of 2010. As virtualization explodes and networks continue to sprawl, there'll be an impetus to rein in complexity. I believe that the desire to more capably manage the technologies at issue--or, more precisely, to fit everything under an easy-to-grasp intellectual umbrella--is behind the emergence of marketing-inspired monikers such as unified computing (Cisco), integrated infrastructure (HP) and dynamic infrastructure (IBM).

The gentleman we're chatting with today, Greg Ness, senior director at networking automation vendor Infoblox, prefers the term Infrastructure 2.0. In fact, he's a founding member of the Infrastructure 2.0 Working Group, which held its initial meeting in September. The group is hoping to frame a reference architecture for Infrastructure 2.0. In our conversation, Ness was circumspect about the details of the group's work, but that's probably because the group is still working toward a problem statement, one that outlines its mission and exactly what it intends to deliver.

It's also possible that the reluctance to dive into details--Ness says that the Working Group (WG) is currently in "stealth mode"--stems from concerns that non-member competitors might be disinclined to cede standards-making to the horse the WG rode in on. Still, the WG is nothing if not a collection of networking glitterati, making it a force to be reckoned with. Its founder is Interop creator Dan Lynch. Along with Lynch, Ness and Infoblox Chief Technology Officer Stu Bailey, members include representatives from Cisco, TCP/IP co-designer Vint Cerf, Bob Grossman of the Open Cloud Consortium, and Yahoo's vice president of cloud computing, Surendra Reddy.

In my interview with Ness, he nicely framed many of the issues confronting the group as well as the next-gen networking arena in general. A quick aside: As for his enthusiasm for the automation of core network services, I get that Infoblox has some skin in this game. Yet that doesn't negate the value of what Greg has to say. Here's an edited transcript of our chat:

Network Computing: Let's dive right in. Define the issue, as you see it.

Greg Ness: If you look at networks today, the way they're operated, run and configured hasn't really changed for the last several decades. There's a contrast between today's networks and today's systems. The systems are getting increasingly automated, while the networks are still very much in silos. If I were to articulate this in one sentence, I'd say that today's networks are run like yesterday's businesses.

Network Computing: What's the impact?

Ness: If you wanted one other big-picture contrast: the CIO today is essentially blind to the status of the network and the operations infrastructure, compared to the CFO, who has real-time visibility into finances. That contrast is becoming increasingly apparent as networks get larger and more dynamic, with more devices attached.

Network Computing: Yet network sprawl shows no signs of abating.

Ness: When virtualization entered the production environment, it had two impacts. One, it enabled the growth of VLANs, because networks really weren't ready for virtualization, and what better way to cope than to create pockets that have their own security policies and where users can access similar types of applications? However, that approach has largely run its course.

You have a second aspect, which is cultural. People can see that they can create, spin up, and move a server with a mouse click, and yet they have to wait for hours--if not days--for the network to be reconfigured when a server is moved.

I was talking to a vice president of cloud for a large enterprise. He commented that it costs about as much to move a server as it does to buy a new one. There's something wrong with that picture. The gating factor is the static, manually configured nature of the network, which is decades behind the automation taking place in systems today.

Network Computing: This is going to emerge as a significant issue, because we're at the very beginning of people having to dynamically re-architect their networks, whether you're talking about enabling hybrid clouds or adding large numbers of mobile users.

Ness: I agree wholeheartedly. Virtualization as a way to achieve capex [capital expenditure] savings has played itself out to a large extent. More and more people are looking to virtualization to address flexibility. But flexibility isn't delivered by greater and greater populations of VLANs, which are increasingly dense. That actually works against flexibility. What has to evolve is more intelligent connectivity between virtual infrastructure and the physical network infrastructure.

Network Computing: So maybe we can rephrase that old Sun Microsystems line. It's no longer that the network is the computer. Now, it's that the intelligent network is the computing resource.

Ness: As vendors enter the Infrastructure 2.0 space, they're going to come primarily from two directions. The first is a network-centric vision, where the network grows more and more intelligent and capable of delivering more and more IT power. The other is a system-centric view, where the network is treated as dumb plumbing.

I think you're going to see these two ideas competing in the marketplace over the next five years. CIOs are going to be the overall winners. They're going to have a choice in direction. IT will be strategic.

Network Computing: So let's tackle the term Infrastructure 2.0. Is there a firm definition?

Ness: Infrastructure 2.0 is essentially about the evolution of today's network away from the old world of middlemen--an age of business where you had lots of people, paperwork, and processes. We're now transitioning from that age of IT to an age of automation.

Within that, you have multiple stages. The first is the automation of core network services. The second is the adoption of tools to build the connectivity mesh between objects connected to the network. The third is the evolution of policies and management. [The goal is to] have management and visibility over a unified IT infrastructure and all of its capabilities.

The best example I can think of is the rise of supply chain management, and solutions like Oracle's, where the people at the top of the IT food chain have much more real-time visibility into what's going on.

Network Computing: What could automated policies enable?

Ness: Think about what's happened within the VLAN and within virtual infrastructures. Think of that now happening within and between cloud environments. For example, you could have policies that say: when a service provider increases its rates by a given amount, or when electricity prices go up, move workloads around to take advantage of cheaper capacity.

This isn't very practical today, but with the evolution of the intelligent mesh and fabric, and then management of that fabric, it's possible. Plus, lots of people are going to come up with solutions that we haven't even dreamed about yet.
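The kind of cost-driven placement policy Ness describes can be sketched as a simple rule-evaluation loop. The provider names, rates, and threshold below are all hypothetical, purely for illustration of the idea:

```python
# Hypothetical sketch of a cost-driven workload placement policy, in the
# spirit of Ness's example. Provider names, rates, and the 10% threshold
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    rate_per_hour: float  # current compute cost, e.g. USD per VM-hour

def choose_provider(providers, current, max_increase=0.10):
    """If the current provider's rate exceeds the cheapest alternative by
    more than max_increase, recommend relocating workloads there."""
    cheapest = min(providers, key=lambda p: p.rate_per_hour)
    if current.rate_per_hour > cheapest.rate_per_hour * (1 + max_increase):
        return cheapest  # policy fires: relocate workloads
    return current       # stay put: the savings don't justify a move

providers = [
    Provider("east-dc", 0.14),
    Provider("west-dc", 0.10),
    Provider("partner-cloud", 0.12),
]
target = choose_provider(providers, current=providers[0])
print(target.name)  # → west-dc  (0.14 exceeds 0.10 * 1.10)
```

In practice, the hard part isn't the rule itself but what Ness calls the intelligent mesh: actually moving the workload and having the network's addresses, VLANs, and security policies follow it automatically.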

Network Computing: You're a founding member of the Infrastructure 2.0 Working Group. What's its objective?

Ness: We've met a couple of times, and the objective is still being worked out. Right now, it's a group of people drawn together by both the opportunity and the challenge.

I think the beauty [of Infrastructure 2.0] is that all of us, from multiple perspectives, are in essence talking about the same thing. Gartner calls it real-time infrastructure. IBM calls it dynamic infrastructure. [Our working group has] talked about it from a Cisco, Infoblox, and F5--and to some extent VMware--perspective.

Network Computing: Give us your closing thoughts.

Ness: The network will evolve. The question is, what's the nature of that evolution, and who benefits from it the most? There's a premium for those who solve this issue first. And that's the very reason I'm hesitant to predict the winner or say who's going to be the center of gravity at this point. But I think there are a lot of encouraging things going on, both outside the working group and within it.