The Biggest Thing Since Ethernet: OpenFlow

New technology could disrupt the networking market, which for decades has relied on Ethernet and TCP/IP standards--and stalwart vendors like Cisco.

Art Wittmann

October 6, 2011


The combination of Ethernet and TCP/IP is so powerful and so fundamental to the way we craft data center systems that it's almost heresy to even suggest we move beyond those protocols. In the 1990s, Gartner famously predicted that Token Ring would supplant Ethernet by the end of that decade, and Novell created its own networking protocol, as did Apple, rather than take on what they saw as an overly complicated and flawed TCP/IP. And yet here we are today: Token Ring is a relic, IPX and AppleTalk are footnotes in the storied past of multiprotocol routers, and Ethernet and TCP/IP are the dominant networking technologies.

While no one in their right mind suggests completely replacing Ethernet and TCP/IP, anyone who's struggled to automate data center load management in today's virtualized data centers knows that current networking protocols present a challenge. For companies to make the most efficient use of their virtualized servers, they must move workloads around their data centers, but doing so implies moving network connectivity along with performance assurances, security, and monitoring requirements. Today, that's either impossible to do automatically, or the method for doing it is highly proprietary. And virtualization isn't the only challenge--as businesses add more applications to their networks, they need to address the unique needs of those apps at a policy level.

Quite simply: Networking must change if it's going to keep up with what businesses want to accomplish. Imagine a network core that carries both live streaming video and financial and healthcare transactions. For video, if the network gets congested, the right move is to drop frames at the source; there's no point in delivering voice or video data late. The network, meanwhile, should never drop packets of financial data. A smarter high-level policy might define separate paths through the network for the two types of traffic. In regulated industries, network designers may want to set policies that make it impossible for certain types of data to reach various parts of the network, or that ensure security appliances always inspect certain flows of sensitive data. Simultaneously, and possibly separately, IT architects will want to create policies that keep certain essential services highly available and protected with a disaster recovery plan.
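
To make that concrete, here's a minimal sketch, in Python, of how such business-level traffic policies might be declared before being handed to a controller. The class and field names are hypothetical, not drawn from any real SDN product:

from dataclasses import dataclass, field

@dataclass
class TrafficPolicy:
    # A business-level statement of how one class of traffic should be treated.
    name: str
    match: dict                # traffic identifiers, e.g. protocol and ports
    on_congestion: str         # "drop-at-source" or "never-drop"
    path: str                  # logical path the controller maps onto physical links
    must_traverse: list = field(default_factory=list)  # appliances this traffic must always pass through

# Live video: dropping frames at the source beats delivering them late.
video = TrafficPolicy(
    name="streaming-video",
    match={"ip_proto": "udp", "dst_ports": (5000, 5999)},
    on_congestion="drop-at-source",
    path="bulk-path",
)

# Financial transactions: never drop, and always route through the security appliance.
finance = TrafficPolicy(
    name="financial-transactions",
    match={"ip_proto": "tcp", "dst_port": 443},
    on_congestion="never-drop",
    path="low-latency-path",
    must_traverse=["ips-cluster-1"],
)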

While it was possible to set up environments that support some of these policies when applications and services were tightly coupled with their servers, virtualization makes such a static configuration hopelessly outdated. Loads change and servers fail--and virtualization lets you deal with all that, but only if the network can respond to a layered set of policies that must be observed in a highly dynamic environment. Network configurations--just like virtual servers--must reconfigure themselves in the blink of an eye, and to do that, bridging and routing protocols must evolve.

So far, they haven't. Network engineers still live in the command-line interfaces of the switches they run. Policies mean writing router rules and access control lists, usually in proprietary formats, and then using scripts to push those rules out to devices across the network. Even where better tools exist, network designers can set quality of service, VLANs, and other parameters, but the Layer 2 switching rules are still set by Ethernet's Spanning Tree Protocol and the routing rules are dictated by TCP/IP. There's little ability to override those mechanisms based on business rules.
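
For a sense of what that scripted status quo looks like, below is a rough Python sketch that pushes hand-written ACL lines to switches over SSH using the paramiko library. The hostnames, credentials, and IOS-style commands are illustrative assumptions; real scripts carry far more vendor- and version-specific logic than this:

import paramiko

# Hand-written, vendor-specific rules in the proprietary format the text describes.
ACL_LINES = [
    "access-list 110 permit tcp any host 10.1.1.5 eq 443",
    "access-list 110 deny ip any any",
]

SWITCHES = ["core-sw-1.example.net", "edge-sw-7.example.net"]  # hypothetical devices

def push_acl(host, username, password):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password)
    shell = client.invoke_shell()
    shell.send("configure terminal\n")
    for line in ACL_LINES:
        shell.send(line + "\n")   # each vendor, and often each OS version, wants slightly different syntax
    shell.send("end\n")
    shell.close()
    client.close()

for switch in SWITCHES:
    push_acl(switch, username="netops", password="********")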

At a conceptual level, the answer has been dubbed "software-defined networking," or SDN--letting network engineers specify configurations in high-level languages, which are then compiled into low-level instructions that tell routers and switches how to handle traffic. The idea is to give engineers more complete access to the lowest-level functions of networking gear so that they, and not TCP/IP or Spanning Tree, dictate how network traffic should move.

At the same time, engineers would work in a higher-level language that makes it easier to describe complex constructs, which are then implemented as simple rules on a router or switch. It's much like a programmer writing in C++ or Visual Basic and having those commands compiled into the machine language of the processor.
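
As a toy illustration of that compiler analogy, the hypothetical function below (none of these names come from a real product) turns one high-level policy into the low-level match-and-action rules that individual switches would execute:

def compile_policy(policy):
    # Translate one high-level policy into per-switch match/action rules,
    # the way a compiler turns source code into machine instructions.
    rules = []
    for switch in policy["switches_on_path"]:
        rules.append({
            "switch": switch,
            "priority": 100,
            "match": policy["match"],                        # which packets the rule applies to
            "actions": [
                {"set_queue": policy["queue"]},              # e.g. a no-drop queue for financial data
                {"output": policy["egress_port"][switch]},   # next hop along the chosen path
            ],
        })
    return rules

# A hypothetical policy for financial transactions over HTTPS.
finance_policy = {
    "match": {"eth_type": 0x0800, "ip_proto": 6, "tcp_dst": 443},
    "queue": "no-drop",
    "switches_on_path": ["core-1", "edge-7"],
    "egress_port": {"core-1": 12, "edge-7": 3},
}

for rule in compile_policy(finance_policy):
    print(rule)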


Departure From TCP/IP

In a software-defined network, a central controller maintains all the rules for the network and disseminates the appropriate instructions to each router or switch. That centralized controller breaks a fundamental precept of TCP/IP, which was designed not to rely on any central device that, if disconnected, could take entire networks down. TCP/IP's design has its roots in a day when hardware failures were much more common; in fact, part of the intent of the U.S. military's Defense Advanced Research Projects Agency in sponsoring the original research behind the Internet was to build Cold War-era systems that could keep operating even when whole chunks of the network had been vaporized by a nuclear bomb.

Today's needs are far different. Letting virtualized servers and other resources appear anywhere on the network, and rerouting traffic instantly when they do, is far more important than gracefully recovering from router or switch crashes. Large enterprise Wi-Fi networks already make wide use of controller-based architectures, and the SDN concept is well proven there. Breaking into the data center and other core enterprise network functions is another matter.

Two major obstacles stand in the way of general acceptance of software-defined networks. The first is the absence of a technical specification describing how hardware vendors should implement SDN constructs in their products. That problem is easy to solve, and good progress is being made with OpenFlow, first proposed by Stanford researchers and now on its way to becoming a recognized standard.

The second problem is tougher to solve because it involves convincing the likes of Cisco, Juniper, and Brocade--three major vendors of TCP/IP networking equipment for enterprises, carriers, and big Internet companies alike--that it's in their interest to participate in OpenFlow.

OpenFlow itself doesn't fully solve the problem of creating a software-defined networking environment, but it adds some important pieces missing from the existing IP network management and control protocols.

First, OpenFlow defines what a controller is and how it connects securely to the network devices it will control. Second, OpenFlow specifies how a controller manipulates a switch's or router's forwarding table, which determines how incoming packets are processed and sent on. Before OpenFlow, there was no standardized way to manipulate the forwarding table directly, so SDNs were either completely proprietary or functionally handicapped.
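
As one concrete illustration of that controller-to-switch model, here's a minimal sketch written for the open-source Ryu controller framework and OpenFlow 1.3; the match on HTTPS traffic and the output port are arbitrary examples, not a recommended design:

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FlowPusher(app_manager.RyuApp):
    # A controller app that rewrites a switch's forwarding table as soon as
    # the switch completes its OpenFlow handshake with the controller.
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Match IPv4 TCP traffic to port 443 and steer it out port 2; the
        # controller, not Spanning Tree, decides the path this flow takes.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=443)
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

        # OFPFlowMod is the OpenFlow message that edits the forwarding table directly.
        mod = parser.OFPFlowMod(datapath=dp, priority=100,
                                match=match, instructions=inst)
        dp.send_msg(mod)

Run under ryu-manager, this small app installs that one forwarding rule on every switch that connects to the controller.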

What Are The Most Important Business Goals Delivered Through Virtualization?

Data: InformationWeek Virtualization Management Survey of 396 business technology professionals in August 2011 and 203 in August 2010

What Will Cisco Do?

It's not hard to imagine why industry heavyweights would be wary of efforts to remove the brainpower from their devices and put it on centralized controllers. That Cisco and other networking vendors enjoy fat profit margins on routers and switches has everything to do with their providing the network hardware, software, and--more often than not--management tools. A fair analogy is the margins earned by Oracle/Sun, IBM, and Hewlett-Packard on their proprietary Unix systems vs. the margins on their x86 servers. In the x86 world, Intel, Microsoft, and VMware, not hardware makers, earn the fat margins.

OpenFlow has received enthusiastic support from enterprise-centric networking vendors such as Extreme and NEC, with NEC and startup Big Switch Networks the first out of the gate with controllers. Both Juniper and Cisco are participating in the Open Networking Foundation but have yet to announce products supporting the standard. Brocade is an enthusiastic supporter but views telecom carrier networks and very large Internet businesses as the most likely first customers.

At some point, though, the heavyweights may have no choice but to offer OpenFlow-based enterprise products, too. Broadcom and Marvell, which make the chips for many switches, both support OpenFlow. So whether Cisco likes it or not, enterprise customers will have the option of buying affordable, quality products that support the standard.

OpenFlow doesn't necessarily doom leaders like Cisco. If Broadcom and Marvell become the Intel and AMD for the enterprise switching market, Cisco can recast itself as Microsoft or VMware. No one understands the complex issues of managing traffic like Cisco does, and if it were to position itself as the premier maker of network controllers in an OpenFlow world, its customers would gladly grant it that status. Cisco won't let another company assume that role. If it doesn't embrace OpenFlow, it'll at least try to offer a proprietary equivalent.

However the move to software-defined networks plays out, Cisco in particular has a strong hand to play. Its already tight relationship with VMware and its leadership positions in both storage networking and data networking will make Cisco hard to beat.

The transition to software-defined networks will happen, but predicting the timing is much trickier. There's no backward compatibility for OpenFlow or any other SDN scheme. Fundamental hardware changes are required to make SDNs perform at high speeds. Cisco paints itself as cautiously enthusiastic about OpenFlow. Indeed, if it can see a way to remain the preferred switch vendor with healthy margins, all while supporting OpenFlow, Cisco may see the technology as the magic bullet that forces a networking hardware refresh faster than the current five- to eight-year cycle. Meanwhile, other relationships are moving ahead. NEC, for example, is developing an OpenFlow virtual switch for use with Windows Server 8 and Microsoft's Hyper-V server virtualization software.

The final and most unpredictable variable in this equation is the network management teams themselves. After being well served for 30 years by Ethernet and TCP/IP's fundamental protocols, they'll move very cautiously to software-defined networks and centralized controller schemes.

In some environments, where massive data centers and highly parallel, high-throughput server farms are the norm, the transformation can't happen fast enough. Google, Microsoft, Facebook, and Yahoo are all members of the Open Networking Foundation, which is driving OpenFlow. For those with more pedestrian setups, getting comfortable with an SDN will take some time.
