Virtualization’s Urgent Next Step: OpenFlow Networks

Networking needs its own version of VMware, and the OpenFlow protocol is the leading candidate to eliminate a network's hard-wired characteristics.

Charles Babcock

July 14, 2011


There's a chokepoint in the process of automating the virtualized data center. Sure, it's quick and easy to spin up a virtual machine, and once running, you can even reallocate memory, CPU, and storage from the management console. But when it comes to assigning or reassigning network bandwidth, good luck.

Networks can be virtualized, but most are not. Network infrastructure springs from hardware design concepts that predate the concept of virtualization. Even the "virtual" local area network must be defined in hardware, when it should be set in software.

In many cases, switches, routers, and controllers depend on their embedded spanning tree algorithms. Spanning tree computes a single loop-free path through a given network, disabling redundant links and producing a fixed hierarchy of devices. After the mapping is implemented, not a lot more can be done: spanning tree is built in, and each device knows its place in the map. That leaves the network calcified in place, unyielding to momentary demands and resistant to taking on different characteristics.
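
To picture what that calcification looks like, consider a toy model. The sketch below, in plain Python with a made-up four-switch topology, shows the shape of what spanning tree produces: one loop-free hierarchy, with every redundant link left idle. (Real STP elects a root by bridge ID and exchanges BPDU messages between switches; this shows only the outcome, not the protocol.)

    from collections import deque

    # A small meshed network: each switch lists its directly linked
    # neighbors. Hypothetical topology, for illustration only.
    links = {
        "sw1": ["sw2", "sw3"],
        "sw2": ["sw1", "sw3", "sw4"],
        "sw3": ["sw1", "sw2", "sw4"],
        "sw4": ["sw2", "sw3"],
    }

    def spanning_tree(root):
        """Return the links kept active; every other link is blocked."""
        active, visited, queue = set(), {root}, deque([root])
        while queue:
            switch = queue.popleft()
            for neighbor in links[switch]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    active.add((switch, neighbor))
                    queue.append(neighbor)
        return active

    print(spanning_tree("sw1"))
    # {('sw1', 'sw2'), ('sw1', 'sw3'), ('sw2', 'sw4')} -- the redundant
    # sw2-sw3 and sw3-sw4 links sit dark until something fails.

Once the tree is chosen, the blocked links carry nothing; the capacity is there, but the algorithm won't use it.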

As sleeping virtual machines come to life, or reach peaks of operation during the workday, it would be nice to be able to allocate and reallocate their network resources, based on where the traffic is. For that matter, when a new virtual machine is spun up in three minutes, it would be great to get its network allocation at the same time instead of waiting three days for a network engineer to rig its network services. Why can't automated policies be applied to the VM's networking, like those for CPU, memory, and storage? Those policies are already in place and ready to be implemented at the moment of a VM's creation. Why is networking a holdout?
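
As a thought experiment, here's what such a policy could look like if the network were programmable. This is a hypothetical sketch, not any vendor's API; every name in it is invented for illustration:

    # Hypothetical: a VM policy that declares network resources right
    # alongside CPU, memory, and storage, applied in one automated pass.
    web_tier_policy = {
        "cpu_cores": 2,
        "memory_gb": 4,
        "storage_gb": 40,
        "network": {"bandwidth_mbps": 500, "vlan": 110},
    }

    def provision_vm(name, policy):
        """Apply compute, storage, and network policy at creation time."""
        print(f"{name}: {policy['cpu_cores']} cores, {policy['memory_gb']} GB RAM")
        print(f"{name}: {policy['storage_gb']} GB disk attached")
        # Today this last step waits on a network engineer; with a
        # programmable network it becomes one more line of policy.
        net = policy["network"]
        print(f"{name}: {net['bandwidth_mbps']} Mbps on VLAN {net['vlan']}")

    provision_vm("web-01", web_tier_policy)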

The reason is that there are still barriers to treating networking as a programmable, software-defined resource. The virtualization wave that has swept through the data center is still breaking against networking concepts expressed as devices hard-wired together. What's needed is to free the network from the underlying hardware, making it a more malleable entity that can be shaped and reshaped as needed. Part of the purpose of lifting the network definition above the hardware is to escape the limitations of the spanning tree algorithm, so useful in its day, now outmoded by virtualization.

Escaping spanning tree is possible if your network vendor supplies virtualization management atop its hardware devices, but then you must use that vendor's equipment exclusively. Most enterprise networks are a mix, making virtualization harder to implement. So what's really needed is a separation of the switching control plane, currently the spanning tree algorithm's fiefdom, from its direct link to the hardware. By allowing a new set of policies or flexible algorithms to run the hardware, we can adapt the network to the world of virtualization.

One way to do so is to adopt a much more flexible approach to network algorithms called OpenFlow, from a research project at Stanford. The difference between spanning tree and OpenFlow is summed up in a statement by Guido Appenzeller, CEO of the startup Big Switch Networks and a director of the Clean Slate research project on network redesign at Stanford: "I don't want to configure my network. I want to program my network," he said in an interview. Appenzeller said he's just echoing a statement he heard from James Hamilton, distinguished engineer at Amazon Web Services. There is widespread awareness of how virtualization separated applications from the hardware, and a wish that something similar would happen with networks.

"Networking needs a VMware," said Kyle Forster, a co-founder of Big Switch and vice president of sales and marketing. Big Switch was created to fill that ambition--founded to take advantage of the OpenFlow protocol that came out of Stanford's Clean State project in 2008 and backed with $11 million in venture capital, for starters."We have a kind of perfect storm in the networking sector. There's an incredible opportunity for startups," said Appenzeller.

Big Switch and Nicira, another firm founded by Clean Slate researchers, along with the major virtualization vendors Citrix Systems and VMware, are beginning to form a consensus that the best way to get to virtualized network resources is through OpenFlow.

OpenFlow is a protocol that opens up each network device's flow table to a remote controller, which can dynamically send the device instructions that assign and reassign bandwidth and other network characteristics, including security. OpenFlow would replace spanning tree in each switch and router. It's not the only approach, but at the moment it's the one with the widest backing.
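
Conceptually, each entry in a flow table pairs a match on packet fields with a list of actions, and the controller writes those entries from outside the box. The sketch below models that match-action idea in plain Python; the real protocol encodes these entries as binary flow-mod messages over a controller connection, so the field names here are simplifications, not the wire format:

    from dataclasses import dataclass, field

    @dataclass
    class FlowEntry:
        match: dict      # packet fields to match, e.g. ingress port, VLAN
        actions: list    # what to do on a match, e.g. forward or drop
        priority: int = 100

    @dataclass
    class FlowTable:
        entries: list = field(default_factory=list)

        def add(self, entry):
            """What a controller's write accomplishes: new forwarding
            behavior takes effect without touching the device firmware."""
            self.entries.append(entry)
            self.entries.sort(key=lambda e: -e.priority)

    # The controller, not an embedded spanning tree, decides the path:
    table = FlowTable()
    table.add(FlowEntry(match={"in_port": 1, "vlan": 110}, actions=["output:2"]))
    table.add(FlowEntry(match={}, actions=["drop"], priority=0))  # default rule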

The Open Networking Foundation has been created to oversee the ongoing development of OpenFlow. Founding members include Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo. Virtualization vendors Citrix Systems and VMware are also part of the foundation, alongside Microsoft, and they see the problem clearly, partly because they are heavily invested in the idea of a flexible, elastic set of data center resources defined in software rather than hardware. They solved their part of the problem--networking within the virtualized server--with a software switch in the hypervisor. They know the difference between a software-defined network and a hardware-defined network.

Of the two, Citrix is the more outspoken on the value of a software-defined versus hardware-defined network: "OpenFlow has the right mojo and the right design principles baked in to be one of the biggest things that will show up in networking in the next five years," predicted Sunil Potti, VP of product management for the Networking and Cloud Group at Citrix, in an interview.

Citrix has worked closely with Nicira, one of the first to produce a commercial OpenFlow-supporting switch, the Nicira Open vSwitch. Eighteen months ago, Randy Bias, CEO of Cloudscaling, wrote in his blog about a Citrix decision to include the Nicira Open vSwitch as the software switch in its private cloud software stack. Bias said: "Nicira is commercializing the OpenFlow switch specification. OpenFlow is a very important change in the way we build, design, and manage network infrastructure."
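
For a sense of how little ceremony is involved in handing a software switch to an external controller, the sketch below drives two standard ovs-vsctl commands from Python; the bridge name and controller address are assumptions for illustration:

    import subprocess

    # Create an Open vSwitch bridge and point it at a remote OpenFlow
    # controller. Port 6633 was the conventional OpenFlow listening port;
    # 192.0.2.10 is a documentation address standing in for a real one.
    for cmd in (
        ["ovs-vsctl", "add-br", "br0"],
        ["ovs-vsctl", "set-controller", "br0", "tcp:192.0.2.10:6633"],
    ):
        subprocess.run(cmd, check=True)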

In response to a query, Bogomil Balkansky, vice president of product management at VMware, said: "VMware is one of 80 participating members of the OpenFlow Networking Foundation that supports the advancements of OpenFlow. We are actively monitoring the space as the standard continues to mature to meet the demands of our customer base. We agree with the problem statement OpenFlow aims to solve and believe programmable networks will be the way of the future."

This is less concrete than Citrix's endorsement. "Actively monitoring" can also mean sitting on the sidelines, although VMware has built OpenFlow characteristics into its hypervisor's software switch; the "sitting on the sidelines" descriptor better fits Cisco's ONF membership. OpenFlow, if it takes hold, will be disruptive to existing networking vendors. VMware certainly sees the virtualization potential of OpenFlow.

There are still concerns about OpenFlow. One is security. The fixed network has certain security attributes that seem to go out the window if we make the forwarding tables of different vendors' network switches available for programmatic direction. It's not clear how OpenFlow will provide sufficient security to replace what's been lost, or how many new exposures it opens up.

Nevertheless, OpenFlow has emerged as the best path to the next phase of data center virtualization: the addition of the network to the pooled resources available to the virtual machine. As it becomes more widely adopted, it will represent a giant step toward the flexible, adaptable computing resource we hope one day to have at the heart of every business.

Charles Babcock is an editor-at-large for InformationWeek.

