
The Biggest Thing Since Ethernet: OpenFlow

The combination of Ethernet and TCP/IP is so powerful and so fundamental to the way we craft data center systems that it's almost heresy to even suggest we move beyond those protocols. In the 1990s, Gartner famously predicted that Token Ring would supplant Ethernet by the end of that decade, and Novell created its own networking protocol, as did Apple, rather than take on what they saw as the flaws of an overly complicated TCP/IP. And yet here we are today: Token Ring is a relic, IPX and AppleTalk are footnotes in the storied past of multiprotocol routers, and Ethernet and TCP/IP are the dominant networking technologies.

While no one in their right mind suggests completely replacing Ethernet and TCP/IP, anyone who's struggled to automate load management in today's virtualized data centers knows that current networking protocols present a challenge. For companies to make the most efficient use of their virtualized servers, they must move workloads around their data centers, but when a workload moves, its network connectivity--along with its performance assurances, security controls, and monitoring requirements--must move with it. Today, that's either impossible to do automatically, or the method for doing it is highly proprietary. And virtualization isn't the only challenge--as businesses add more applications to their networks, they need to address the unique needs of those apps at a policy level.

Quite simply: Networking must change if it's going to keep up with what businesses want to accomplish. Imagine a network core that carries both heavy live streaming video and financial and healthcare transactions. For video, if the network gets congested, the right response is to drop frames at the source; there's no point in delivering voice or video data late. The network, meanwhile, should never drop packets of financial data. A smarter high-level policy might be to define separate paths through the network for the two types of traffic. In regulated industries, network designers may want to set policies that prevent certain types of data from ever reaching certain parts of the network, or that ensure security appliances always inspect flows of sensitive data. Simultaneously, and possibly separately, IT architects will want to create policies to ensure that certain essential services are highly available and protected with a disaster recovery plan.
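To make that policy layer concrete, here is a minimal sketch, in Python, of how those traffic classes might be written down as data rather than as per-device commands. Every name, field, and value below is hypothetical and meant only to illustrate the kinds of intent such a policy would capture.

    # A minimal sketch (hypothetical names and values) of the policies described
    # above, expressed as data: how each traffic class should be treated.
    TRAFFIC_POLICIES = {
        "streaming_video": {
            "on_congestion": "drop_at_source",  # late video frames are worthless
            "loss_tolerant": True,
            "path": "best_effort",              # shared path through the core
        },
        "financial_transactions": {
            "on_congestion": "never_drop",      # queue or reroute instead
            "loss_tolerant": False,
            "path": "dedicated",                # separate path through the core
        },
        "regulated_records": {
            "must_traverse": ["security_appliance"],  # always inspected
            "forbidden_segments": ["guest_network"],  # never reaches these
        },
    }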

While it was possible to set up environments that support some of these policies when applications and services were tightly coupled with their servers, virtualization makes such a static configuration hopelessly outdated. Loads change and servers fail--and virtualization lets you deal with all that, but only if the network can respond to a layered set of policies that must be observed in a highly dynamic environment. Network configurations--just like virtual servers--must reconfigure themselves in the blink of an eye, and to do that, bridging and routing protocols must evolve.

So far, they haven't. Network engineers are still steeped in the command-line interfaces of the switches they run. Setting policy means writing router rules and access control lists, usually in proprietary formats, and then using scripts to apply those rules to devices across the network. Even where better tools exist, network designers can set quality of service, VLANs, and other parameters, but the Layer 2 switching rules are still set by Ethernet's Spanning Tree Protocol and the routing decisions by TCP/IP's routing protocols. There's little ability to override those mechanisms based on business rules.
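As a rough illustration of that status quo, the sketch below hand-renders a vendor-style access list and QoS policy (the IOS-like syntax is illustrative, not exact) and "pushes" it device by device. In real environments the push step would be an SSH or expect script; the device names and addresses here are hypothetical.

    # A sketch of today's script-driven approach: business intent is hand-translated
    # into vendor CLI lines (IOS-like syntax shown for illustration only) and then
    # applied to each device, one box at a time.
    FINANCIAL_SUBNET = "10.20.0.0 0.0.255.255"  # hypothetical address space

    def financial_qos_config(acl_id: int = 110) -> list[str]:
        """Render an access list and policy map that prioritize financial traffic."""
        return [
            f"access-list {acl_id} permit ip {FINANCIAL_SUBNET} any",
            "class-map match-all FINANCIAL",
            f" match access-group {acl_id}",
            "policy-map CORE-QOS",
            " class FINANCIAL",
            "  priority percent 30",
        ]

    def push_config(host: str, lines: list[str]) -> None:
        """Stand-in for the per-device push step (SSH/expect in real scripts)."""
        print(f"--- would push to {host} ---")
        print("\n".join(lines))

    if __name__ == "__main__":
        for switch in ("core-sw1", "core-sw2", "edge-sw7"):  # hypothetical inventory
            push_config(switch, financial_qos_config())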

At a conceptual level, the answer has been dubbed "software-defined networking," or SDN--letting network engineers specify configurations in high-level languages, which are then compiled into low-level instructions that tell routers and switches how to handle traffic. The idea is to give engineers more complete access to the lowest-level functions of networking gear so that they, and not TCP/IP or Spanning Tree, dictate how network traffic should move.
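At its core, OpenFlow exposes those low-level functions as a flow table: entries that pair match criteria with actions, installed by an external controller. The toy model below captures that idea in Python; the field names and port numbers are simplified stand-ins, not the actual OpenFlow message format.

    from dataclasses import dataclass, field

    # A toy model of the flow-table abstraction a controller programs: each entry
    # pairs match criteria with actions, and the highest-priority match wins.
    @dataclass
    class FlowEntry:
        priority: int
        match: dict                                   # e.g. {"tcp_dst": 7001}
        actions: list = field(default_factory=list)   # e.g. ["output:3"] or ["drop"]

    def handle_packet(flow_table: list[FlowEntry], packet: dict) -> list:
        """Return the actions of the highest-priority entry whose match fields all
        appear in the packet headers (a greatly simplified switch pipeline)."""
        for entry in sorted(flow_table, key=lambda e: e.priority, reverse=True):
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["send_to_controller"]  # table miss: ask the controller what to do

    # Hypothetical ports: financial flows out port 3, streaming video out port 2.
    table = [
        FlowEntry(priority=200, match={"tcp_dst": 7001}, actions=["output:3"]),
        FlowEntry(priority=100, match={"udp_dst": 5004}, actions=["set_queue:2", "output:2"]),
    ]
    print(handle_packet(table, {"udp_dst": 5004}))  # ['set_queue:2', 'output:2']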

At the same time, engineers would work in a higher-level language to describe complex constructs that get implemented as simple rules on each router or switch--much as a programmer writes in C++ or Visual Basic and a compiler translates that code into the machine language of the processor.
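Continuing the analogy, the "compiler" would be controller software that expands a high-level statement of intent into per-switch rules like the flow entries sketched above. The fragment below is purely illustrative--the policy names, switch inventory, and output format are hypothetical, not any real SDN controller's API.

    # An illustrative "policy compiler": high-level intent (which traffic class gets
    # which treatment) is expanded into low-level per-switch rules. All names are
    # hypothetical; a real controller would install these as flow entries.
    POLICY = {
        "financial": {"match": {"tcp_dst": 7001}, "treatment": "lossless"},
        "video":     {"match": {"udp_dst": 5004}, "treatment": "best_effort"},
    }

    PATHS = {  # which output port each treatment uses on each switch
        "lossless":    {"core-sw1": 3, "core-sw2": 3},
        "best_effort": {"core-sw1": 2, "core-sw2": 4},
    }

    def compile_policy(policy: dict, paths: dict) -> dict:
        """Expand each high-level traffic class into one concrete rule per switch."""
        rules: dict[str, list[dict]] = {}
        for name, spec in policy.items():
            for switch, port in paths[spec["treatment"]].items():
                rules.setdefault(switch, []).append({
                    "match": spec["match"],
                    "actions": [f"output:{port}"],
                    "class": name,
                })
        return rules

    for switch, entries in compile_policy(POLICY, PATHS).items():
        print(switch, entries)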
