OpenFlow Test Deployment Options

In a perfect world, software-defined networks could be deployed from the ground up, with new hardware, to avoid the need to support legacy architectures and/or protocols. However, that’s an unrealistic option. In most cases, enterprises will experiment by running hybrid networks using traditional switching and routing mechanisms alongside an SDN environment.

As network engineers digest the concepts of software-defined networking, the next step is to get their feet wet by playing with SDN technologies and protocols. One of the core protocols associated with SDN is OpenFlow, which programs hardware and virtual switches. This article will look at the use of OpenFlow because most networking vendors support OpenFlow firmware in beta or in production products today.

Early adopters of OpenFlow applications need blueprints for integrating OpenFlow hardware into existing networks, which allows for preliminary use case testing and adoption. There are a number of ways to integrate pockets of OpenFlow into native networks using path isolation. Note that even when the hardware forwarding tables are logically partitioned (VLANs, virtual contexts, logical switches), the OpenFlow flow tables and the native Ethernet forwarding engines still share the same silicon.
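As a toy illustration of that shared-silicon point, here is a sketch (all names and sizes are made up, and the capacity is deliberately tiny) of logical partitions drawing rules from one fixed-capacity TCAM:

```python
class PartitionedTcam:
    """Logical partitions are bookkeeping over one physical table, so flow
    rules in any partition consume the same shared hardware capacity."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = []  # (partition, rule) pairs in one shared table

    def add(self, partition, rule):
        if len(self.entries) >= self.capacity:
            raise MemoryError("shared TCAM full")
        self.entries.append((partition, rule))

tcam = PartitionedTcam()
# Rules from an OpenFlow logical switch and from the native context
# land in the same physical table.
tcam.add("vlan100-openflow", "match tcp_dst=80 -> tap")
tcam.add("vlan200-native", "match dst=10.0.0.0/8 -> port 3")
```

Whichever partition fills the table first starves the others, which is why capacity planning has to be done against the physical table, not per logical switch.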

OpenFlow is a flexible mechanism. Once traffic is ingested and classified, it can be forwarded on top of the native network using various encapsulation methods such as VXLAN, GRE or MPLS label-switched paths (LSPs); via VLANs; or simply routed along the native Interior Gateway Protocol (IGP) paths.
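As a sketch of the encapsulation idea, here is a minimal VXLAN-style header built with Python's struct module, following the RFC 7348 layout; in a real deployment the result would then ride inside UDP/IP across the native network:

```python
import struct

VXLAN_FLAGS = 0x08  # "VNI valid" flag bit per RFC 7348

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame.

    Layout: flags (1 byte), reserved (3 bytes), VNI (3 bytes), reserved (1 byte).
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!BBHI", VXLAN_FLAGS, 0, 0, vni << 8)
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    flags, _, _, vni_field = struct.unpack("!BBHI", packet[:8])
    if not flags & VXLAN_FLAGS:
        raise ValueError("VNI-valid flag not set")
    return vni_field >> 8, packet[8:]
```

The 24-bit VNI is what lets many isolated OpenFlow segments share one underlay, the same role the 12-bit VLAN ID plays in plain Ethernet but with far more identifiers.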

SDN islands need to be integrated into the native network, whether to provide a default drain from these islands into the native network or to stitch disparate SDN islands together. Most SDN products have some form of gateway device to facilitate exits from the OpenFlow forwarding domain. Some vendors also support interactions between the OpenFlow forwarding pipeline and the normal L2/L3 forwarding pipeline that exists today.

I’ll look at two integration strategies to bring OpenFlow into a native network. The first, SDN gateways, integrates OpenFlow traffic into the native network using the “normal” port as defined by the OpenFlow specification. The second integrates the OpenFlow forwarding pipeline into the normal Ethernet forwarding pipeline.

Two Example SDN Integrations

1. SDN gateway: The OpenFlow and native forwarding pipelines can be logically isolated from one another. Early vendor implementations separate them by VLAN ID, along with isolation through contexts and logical switches. The gateway can be as simple as a routed interface on the native network that acts as a default gateway to drain L3 lookups. The SDN gateways could be the same interfaces that advertise the network prefix into the IGP and function as a default gateway, and they can be paired with protocols such as Virtual Router Redundancy Protocol (VRRP) for high availability.
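The default-drain behavior is ordinary longest-prefix routing with a 0.0.0.0/0 catch-all pointing at the gateway. A minimal sketch using Python's ipaddress module (addresses and next-hop names are illustrative):

```python
import ipaddress

# Destinations inside the OpenFlow island match the specific prefix;
# everything else falls through to the default route and "drains"
# to the SDN gateway on the native network.
routes = [
    (ipaddress.ip_network("10.1.0.0/16"), "openflow-island"),
    (ipaddress.ip_network("0.0.0.0/0"), "sdn-gateway"),  # default drain
]

def next_hop(dst: str) -> str:
    """Longest-prefix match, exactly like a conventional RIB lookup."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, hop) for net, hop in routes if addr in net]
    return max(matches)[1]
```

Pairing two such gateway interfaces under VRRP would keep the drain reachable if one physical gateway fails.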

2. Hybrid pipeline: Some vendors also support a blend of OpenFlow and native pipelines. OpenFlow “normal” is a reserved port in the specification that can be used as an output interface as a result of a match + action operation in the 1.0 version of OpenFlow.

OpenFlow Gateway Native Integration

The OFP_Normal action sends a packet from the OpenFlow pipeline to the native switching pipeline for forwarding. OFP_Normal is used here only as a default forwarding mechanism, much like a default route in routing. The proactive rules, placed at a higher priority in the following illustration, are matched before the normal L2/L3 pipeline. Priorities allow for application flow rules such as custom forwarding, security use cases, network taps, or any other function that can be performed using L1-L4 headers.
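A toy model of that priority ordering (field and action names are illustrative; NORMAL stands in for the spec's reserved OFPP_NORMAL port): high-priority proactive rules are checked first, and a lowest-priority catch-all hands everything else back to the native pipeline.

```python
NORMAL = "NORMAL"  # stands in for the reserved OFPP_NORMAL port

class FlowTable:
    def __init__(self):
        self.rules = []  # (priority, match_fn, action)

    def add_rule(self, priority, match_fn, action):
        self.rules.append((priority, match_fn, action))
        self.rules.sort(key=lambda r: -r[0])  # highest priority first

    def lookup(self, pkt):
        for _prio, match_fn, action in self.rules:
            if match_fn(pkt):
                return action
        return "DROP"  # table miss

table = FlowTable()
# Proactive, high-priority rule: tap all traffic to TCP port 80.
table.add_rule(200, lambda p: p.get("tcp_dst") == 80, "OUTPUT:tap0")
# Default rule: everything else falls through to the native pipeline.
table.add_rule(0, lambda p: True, f"OUTPUT:{NORMAL}")
```

Only the tap rule is SDN-specific; all other traffic behaves exactly as it would on a conventional switch.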

[Figure: Proactive Rules]

Both solutions have their pros and cons. OFP_Normal allows a bit more forwarding flexibility because the provider edge and L3 edge retain full visibility of the network's topology, whereas SDN gateways separate the topologies over physical links.

SDN gateways have a much more traditional look and feel. They present a flat OpenFlow network with a default gateway for client traffic, reached either through a controller proxy or directly by the client host. Note that both approaches suffer from a lack of adoption and maturity, which adds risk to any early production SDN deployment.

Performance concerns are an issue with hybrid deployments. One reason is that TCAM operates at fairly slow speeds. Going from 1Gb to 10Gb line rates forces some vendors to do parallel TCAM lookups, thus reducing flow table rule capacity. Another performance concern is reactive flow policy, in which packets have to be sent to the controller for forwarding instructions, which can add latency. In a subsequent tutorial, I'll show how flow rules can be preinstalled, eliminating the need for the OpenFlow switch to send a packet-in to the controller for instructions.
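A toy model of the reactive-versus-proactive distinction (all names are illustrative): in reactive mode, a flow table miss triggers a packet-in round trip to the controller, while a preinstalled rule avoids that latency entirely.

```python
class Switch:
    def __init__(self, controller):
        self.flows = {}          # match key -> output port
        self.controller = controller
        self.packet_ins = 0      # count of misses sent to the controller

    def install(self, key, port):
        self.flows[key] = port   # proactive preinstallation

    def forward(self, key):
        if key not in self.flows:
            self.packet_ins += 1                  # reactive: ask controller
            self.flows[key] = self.controller(key)
        return self.flows[key]

controller = lambda key: 1  # trivially sends everything out port 1

reactive = Switch(controller)
reactive.forward("10.0.0.5")      # miss: one packet-in round trip
reactive.forward("10.0.0.5")      # hit: no controller involvement

proactive = Switch(controller)
proactive.install("10.0.0.5", 1)  # rule preinstalled by the controller
proactive.forward("10.0.0.5")     # no packet-in at all
```

Even in reactive mode only the first packet of a flow pays the round-trip cost, but for short-lived flows that first-packet latency can dominate.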

SDN does not represent the end of networking as we know it. More than ever, talented operators, engineers and architects will be required to shape the future of networking. While OpenFlow is just one piece of the SDN puzzle, it is one of the few paths with momentum that may decouple monolithic network elements with a forwarding abstraction between OS and hardware. There’s considerable vendor support for OpenFlow and a variety of open source projects around OpenFlow-based controllers, which means now is a good time to start experimenting with the protocol and SDN.

Brent Salisbury, CCIE#11972, is a network architect at a state university. You can follow him on Twitter at @networkstatic.
