
OpenFlow Tutorial: Next-Gen Networking Has Much To Prove

The emerging protocol promises to make it easier to deal with virtualization while also using lower-cost switches.

Roots Of Network Virtualization

The separation of the control plane from the forwarding plane isn't new with OpenFlow. It has been used in the design of high-end routers since the mid-1990s and telephony switches long before that. The initial motivation was to protect each switch element from a degradation in another: A very busy route processor doesn't hurt forwarding performance, and peaks in network load don't pull processing resources away from the control plane. Significantly, that separation--first within a single chassis, and more recently with a single processor controlling multiple switching fabrics--provided an environment for the development of high-availability features.

Multiprotocol label switching (MPLS), another key technology in modern networks, also has features that relate to this trend, since it builds an "intelligent" access layer around a relatively dumb but high-performance network core. That structure enables the creation of flexible, innovative services over a homogeneous backbone. MPLS also reflects a trend in networks similar to what we're seeing in server virtualization: A single physical network uses a logical component--MPLS--to allow the overlay of multiple virtualized network services.

What's new and different with OpenFlow is that, in theory, it could work with any type of commodity switch.

Operating in the gap between a centralized control plane and a distributed forwarding plane, OpenFlow defines a protocol that lets a controller use a common set of instructions to add, modify, or delete entries in a switch's forwarding table. An instruction set might not sound like a technology breakthrough, but it's all about what people do with that instruction set, says Kyle Forster, co-founder of Big Switch Networks, a startup building networking products based on the OpenFlow standard. If you read the x86 server instruction set, it "isn't obvious that you could build Linux, Microsoft Word, or the Apache Web Server on top," Forster says. "I think OpenFlow is the same way. It isn't about the basics. It's all about the layers and layers of software built on top. That is where the benefits are going to be felt."
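Forster's instruction-set analogy can be made concrete with a toy model. The sketch below (illustrative only; the class and method names are invented for this article, not a real OpenFlow library) shows the three primitive operations a controller issues against a switch's flow table:

```python
# Toy in-memory flow table illustrating the three primitive operations
# an OpenFlow controller issues to a switch: add, modify, and delete
# entries. This is a sketch, not a real OpenFlow implementation.

class FlowTable:
    def __init__(self):
        # Maps match fields (e.g. a destination MAC) to a forwarding action.
        self.entries = {}

    def add(self, match, action):
        # Controller instructs the switch: forward matching packets this way.
        self.entries[match] = action

    def modify(self, match, action):
        # Controller changes the action for an existing entry.
        if match in self.entries:
            self.entries[match] = action

    def delete(self, match):
        # Controller removes the entry; matching packets no longer hit a rule.
        self.entries.pop(match, None)


table = FlowTable()
# Match frames addressed to one MAC and send them out port 3 ...
table.add(("dst_mac", "aa:bb:cc:dd:ee:ff"), {"output": 3})
# ... then reroute them out port 7, then withdraw the rule entirely.
table.modify(("dst_mac", "aa:bb:cc:dd:ee:ff"), {"output": 7})
table.delete(("dst_mac", "aa:bb:cc:dd:ee:ff"))
```

The point of the analogy is that these primitives are deliberately minimal; the interesting behavior lives in the controller software that decides which entries to install and when.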

Origins Of OpenFlow

OpenFlow began at Stanford University, under professor Nick McKeown, then-graduate student Martin Casado, and others. Their goal was to create an environment in which researchers could experiment with new network protocols while using the campus's production network. It let researchers try out those protocols in a realistic environment without the expense of building a test network or the risk of blowing up the campus network and disrupting production traffic.


The first consideration for using OpenFlow outside of academia was to scale bandwidth in massive data centers. Forster calls this the "million-MAC-address Hadoop/MapReduce" problem. For uses such as Google's search engine, parallel processing of massive data sets takes place across clusters of tens or hundreds of thousands of servers. For such big data applications, "it doesn't take much back-of-the-envelope calculating to come to the conclusion that a tree-based architecture will require throughput on core switches/routers that simply can't be bought at any price right now," Forster says.
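Forster's back-of-the-envelope point is easy to reproduce. The numbers below are illustrative assumptions, not figures from the article: in a tree-based architecture, traffic between servers in different branches must cross the core, so supporting full east-west bandwidth requires core throughput on the order of the summed server uplinks.

```python
# Back-of-the-envelope sketch with assumed, illustrative numbers:
# worst case, every server sends at line rate to a server in another
# branch of the tree, so all of that traffic converges on the core.

servers = 100_000        # assumed cluster size ("tens or hundreds of thousands")
uplink_gbps = 10         # assumed per-server NIC speed

core_tbps = servers * uplink_gbps / 1000  # Gbps -> Tbps
print(f"Worst-case core bisection bandwidth: {core_tbps:.0f} Tbps")
```

At these assumed numbers the core would need roughly 1,000 Tbps of bisection bandwidth, which makes the "can't be bought at any price" conclusion plausible for a conventional tree built around a handful of core boxes.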

Interest in OpenFlow has since expanded to cloud computing and virtualized services companies. Earlier this year, six companies--Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo--formed the Open Networking Foundation (ONF) to support development of software-defined networking (SDN) and, by extension, the OpenFlow protocol. ONF now has more than 40 members, mostly vendors, which pay $30,000 a year to fund development of the standard.

The version of the OpenFlow specification in most widespread use is 1.0.0, which was released in December 2009 and defines three components: controller, secure channel, and flow table.

The newest version of the spec, 1.1.0, released in February, adds two more components: group table and pipeline (see diagram, right).

The controller is, of course, the control plane, and it is what gives IT programmatic control over the forwarding plane. The controller manages the forwarding plane by adding, changing, or deleting entries in the switches' flow tables.

The controller manages the switches across a secure channel. "Secure channel" is in fact slightly misnamed, since it doesn't provide any security on its own. While the channel is normally authenticated and encrypted using Transport Layer Security, it can be simple unsecured TCP. Implementing an SDN architecture using unsecured communication channels is utter folly, of course, but it can be done. Also, messages across the secure channel are reliable and ordered, but aren't acknowledged and therefore aren't guaranteed.
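The two channel options the spec allows can be sketched with Python's standard ssl and socket modules. This is an illustrative fragment, not a working OpenFlow endpoint: it only builds the TLS context and a plain TCP socket, and connects to nothing.

```python
# Sketch of the two transport options for the "secure channel":
# TLS (authenticated and encrypted, the recommended setup) versus
# plain TCP (permitted by the spec, but with no security at all).
# Illustrative only -- no OpenFlow messages are exchanged here.
import socket
import ssl

# Option 1: TLS. The default client context verifies the peer's
# certificate, so a switch can confirm it is talking to its controller.
tls_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

# Option 2: plain TCP. Reliable and ordered (TCP gives you that),
# but unauthenticated and unencrypted -- any host that can reach the
# port could impersonate the controller or read the flow-table updates.
plain_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
plain_sock.close()
```

The contrast is the article's point: the channel's ordering and reliability come from TCP either way, while authentication and confidentiality exist only in the TLS case.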
