Tom Hollingsworth



OpenFlow Vs. Fabrics

Network vendors have been looking for ways to get around the inherent limitations of Spanning Tree Protocol and the traditional three-tier network design of core, distribution and access layers. One option is a fabric, which can increase speed and create shortcuts for data flows. However, fabrics are being challenged by the rise of OpenFlow and software-defined networking.

First, let's review fabrics. Fabrics such as Juniper's QFabric and Brocade's VCS are designed to replace the leaf-spine architecture in a data center. While the physical topology of a fabric resembles a leaf-spine design, the interconnects act as aggregation points rather than intelligent switches: they serve mostly to connect the end-node leaf switches together rather than make packet-forwarding decisions. Instead of programming spine nodes as high-speed switches designed to facilitate east-west traffic, fabric topologies use interconnects to attach all the leaf-node switches together in a virtual mesh topology.


To a server node attached to a leaf switch, every destination appears to be one hop away. Packets can be sent over multiple paths inside the fabric to increase effective bandwidth. Fabrics are resilient when it comes to rerouting in the event an interconnect goes down. Fabrics shine when traffic flows can be split up and pushed across the backbone.
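To make the multipath behavior concrete, here's a minimal Python sketch of how a leaf switch might hash a flow's 5-tuple to choose one of several interconnect paths, ECMP-style. The topology, names, and hash scheme are illustrative assumptions, not any vendor's implementation.

```python
# A minimal sketch (not vendor code): spreading flows across fabric
# interconnects by hashing the 5-tuple, so each flow stays on one
# path while different flows use different paths.
import hashlib

INTERCONNECTS = ["spine-1", "spine-2", "spine-3", "spine-4"]  # assumed names

def pick_path(src_ip, dst_ip, src_port, dst_port, proto):
    """All packets of one flow hash to the same interconnect;
    distinct flows spread across all of them."""
    key = f"{src_ip}:{dst_ip}:{src_port}:{dst_port}:{proto}".encode()
    index = int(hashlib.md5(key).hexdigest(), 16) % len(INTERCONNECTS)
    return INTERCONNECTS[index]

# Two flows between the same pair of leaf switches can land on
# different interconnects, raising effective bandwidth.
print(pick_path("10.0.1.5", "10.0.2.9", 49152, 443, "tcp"))
print(pick_path("10.0.1.5", "10.0.2.9", 49153, 443, "tcp"))
```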

Fabrics are less useful when all that traffic is "hidden" in a tunnel, such as those created by NSX. NSX uses vWires, a virtual wire construct, to encapsulate packets for transport between two hosts. Tunnels take a path through the network and force all their traffic down that path. There's no way to see what's going through a given tunnel, at least from the network's perspective. To a switch, a tunnel looks like one obscured traffic flow.
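Here's a toy sketch of why that opacity happens: once the VM's packet is wrapped in an outer header between two hosts, a switch along the path can only act on the outer addresses. The field names are illustrative, not the actual vWire packet format.

```python
# Toy model of tunnel encapsulation: the inner flow details are
# invisible to any switch that only reads the outer header.
inner_packet = {
    "src_ip": "192.168.10.5",   # VM-to-VM addresses and ports
    "dst_ip": "192.168.20.7",   # live inside the tunnel payload
    "dst_port": 5060,
}

def encapsulate(inner, src_host, dst_host):
    """Wrap the VM's packet in an outer header between two hosts."""
    return {"src_ip": src_host, "dst_ip": dst_host, "payload": inner}

wire_packet = encapsulate(inner_packet, "10.1.1.1", "10.1.1.2")

# The fabric can only hash on the outer header, so every flow in the
# tunnel looks like one big flow between the same two hosts.
print(wire_packet["src_ip"], "->", wire_packet["dst_ip"])
```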

[VMware NSX garnered a lot of attention at its launch, but Greg Ferro explains why it's too soon to crown it king of the data center in "VMware NSX Caution Signs."]

Fabrics need visibility into the traffic as it lands on the leaf-node switches. That's why Brocade, Juniper, and others have worked with VMware to include an NSX decapsulation agent on the edge of the fabric. If the vWire can be opened up, the fabric mechanisms for accelerating the data flows will work on the data contained within. When the traffic exits the other side of the fabric, the NSX agent can reassemble it back into a vWire tunnel. The fabric gets to work its magic and NSX never knows the difference.
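A rough sketch of that edge behavior, with hypothetical names standing in for the decapsulation agent and the fabric's per-flow hashing:

```python
# Illustrative sketch only; these are not VMware's or any fabric
# vendor's actual APIs.
def pick_inner_path(inner):
    # Stand-in for the fabric hashing the *inner* (real) flow.
    return hash((inner["src_ip"], inner["dst_ip"], inner["dst_port"])) % 4

def fabric_transit(wire_packet):
    inner = wire_packet["payload"]      # 1. agent opens the vWire at ingress
    path = pick_inner_path(inner)       # 2. fabric hashes the real flow
    print(f"inner flow pinned to interconnect {path}")
    return {"src_ip": wire_packet["src_ip"],   # 3. tunnel is rebuilt at
            "dst_ip": wire_packet["dst_ip"],   #    egress; NSX never knows
            "payload": inner}

wire = {"src_ip": "10.1.1.1", "dst_ip": "10.1.1.2",
        "payload": {"src_ip": "192.168.10.5", "dst_ip": "192.168.20.7",
                    "dst_port": 5060}}
fabric_transit(wire)
```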

So how does this compare to OpenFlow?

OpenFlow provides many of the same advantages as a traditional networking fabric. It allows for path determination. You can even take it a step further by slicing the network and forcing traffic onto specific paths to ensure security separation or low-cost link utilization. OpenFlow gives you more granular control over your traffic flows, and it can deal with an NSX vWire much better than a traditional fabric can because of the flow-based nature of NSX.

That's because OpenFlow doesn't merely load every packet into a tunnel construct and launch the whole thing into the network. OpenFlow uses hop-by-hop packet processing to determine the optimal route at every hop. It functions much like traditional packet forwarding, only with the advantage of a software controller sitting above the data plane.
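To illustrate the hop-by-hop model, here's a simplified sketch of a flow-table lookup. The rule format is a loose approximation of OpenFlow match/action entries, not the actual wire protocol.

```python
# Simplified flow table: each switch matches packets against rules the
# controller installed and applies the first matching action.
import ipaddress

flow_table = [
    ({"dst_net": "10.0.2.0/24", "dst_port": 5060}, "output:1"),  # voice
    ({"dst_net": "10.0.2.0/24"},                   "output:2"),  # everything else
]

def matches(match, packet):
    if "dst_port" in match and packet.get("dst_port") != match["dst_port"]:
        return False
    return (ipaddress.ip_address(packet["dst_ip"])
            in ipaddress.ip_network(match["dst_net"]))

def process(packet):
    """Unmatched packets go to the controller, which computes a path
    and installs new rules on every switch along it."""
    for match, action in flow_table:
        if matches(match, packet):
            return action
    return "send to controller"

print(process({"dst_ip": "10.0.2.9", "dst_port": 5060}))  # output:1
print(process({"dst_ip": "10.0.2.9", "dst_port": 2049}))  # output:2
```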

That controller can "see" traffic patterns in the network and adjust accordingly at each individual hop. If a link goes down between two data centers, OpenFlow can adjust dynamically to route traffic around the failure. And because it can see all flows, it can intelligently adjust and split those flows so as not to overwhelm a backup link when a failure occurs.
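As a sketch of that controller-side logic, the toy function below spreads displaced flows greedily across the surviving links instead of dumping them all onto one backup. The capacities and demands are made-up numbers.

```python
# Hypothetical link capacities (Gbps) and per-flow demands (Gbps).
links = {"primary": 10, "backup-a": 5, "backup-b": 5}
flows = {"f1": 4, "f2": 3, "f3": 2}

def reroute_after_failure(failed):
    """Place each displaced flow on the least-utilized surviving link,
    largest flows first, so no single backup link is overwhelmed."""
    load = {link: 0 for link in links if link != failed}
    placement = {}
    for flow, demand in sorted(flows.items(), key=lambda kv: -kv[1]):
        best = min(load, key=lambda link: load[link] / links[link])
        load[best] += demand
        placement[flow] = best
    return placement

print(reroute_after_failure("primary"))
# {'f1': 'backup-a', 'f2': 'backup-b', 'f3': 'backup-b'}
```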

Where OpenFlow differs is in the level of involvement required from the network team. Traditional fabrics offer very little tuning ability: you set everything up and let the fabric director software decide how best to manage the traffic flows.

By contrast, OpenFlow has much greater reach but requires a bit more manual tinkering. Network admins can drill down to the individual flow level but run the risk of losing sight of the bigger picture. While a given flow can receive preferential treatment, it might lead to other flows being forced down suboptimal paths or getting lost in a sea of path reroutes based on complex policies.

Fabrics need visibility into the packets. Breaking the flows out of a vWire allows the fabric to work dark magic to make everything faster. OpenFlow gives you additional control over the vWire flows at the cost of requiring knowledge of that dark magic to reap the same kind of rewards.

By decapsulating the traffic from the vWire tunnel, you gain much deeper visibility into the contents of the payload. Have voice traffic that is delay sensitive? Forward it across a low-latency, higher-cost link. Is it bulk FTP or backup traffic? Send it down the low-cost links that have higher delay. Without extracting the payload from the tunnel, you can only forward every packet as fast as possible without regard for the optimal path, because in a vWire, every packet looks the same.
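A short sketch of that kind of policy, assuming the decapsulated headers are visible; the ports and link names are illustrative:

```python
# Choose an egress link based on what the (decapsulated) traffic is.
LOW_LATENCY = "low-latency-higher-cost-link"
LOW_COST = "low-cost-higher-delay-link"

def choose_link(packet):
    """Delay-sensitive traffic gets the fast link; bulk transfers take
    the cheap one. Inside an opaque vWire this choice is impossible,
    because every packet looks the same from the outside."""
    if packet.get("dst_port") in (5060, 5061):   # SIP voice signaling
        return LOW_LATENCY
    if packet.get("dst_port") in (20, 21):       # bulk FTP
        return LOW_COST
    return LOW_COST                              # default to the cheap path

print(choose_link({"dst_port": 5060}))  # low-latency-higher-cost-link
print(choose_link({"dst_port": 21}))    # low-cost-higher-delay-link
```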

At the end of the day, I think there's a place for both models. Fabrics can exist in the data center to replace leaf-spine topologies and accelerate east-west traffic. OpenFlow can be used across the entire network to steer flows toward the fabric, as well as to ensure that flows traverse the links where devices like load balancers and firewalls sit.

Fabrics represent the height of traditional networking hardware optimization in the data center, but if the future is truly in software, I think we'll see OpenFlow pull away sooner rather than later.

Do you use a network fabric? Or are you an OpenFlow shop? Will you be implementing NSX across both? Or neither? Let me know in the comments section below.

Tom Hollingsworth, CCIE #29213, is a former VAR network engineer with 10 years of experience working with primary education institutions and the problems they face in implementing technology solutions. He has worked with wireless, storage, and server virtualization in addition to routing and switching. Recently, Tom switched careers to focus on technology blogging and social media outreach as part of Gestalt IT media. Tom has a regular blog at http://networkingnerd.net and can be heard on various industry podcasts pontificating about the role technology will play in the future.

