OpenFlow Vs. Fabrics
October 08, 2013
Network vendors have been looking for ways to get around the inherent limitations of Spanning Tree Protocol and the traditional three-tier network design of core, distribution and access layers. One option is a fabric, which can increase speed and create shortcuts for data flows. However, fabrics are being challenged by the rise of OpenFlow and software-defined networking.
First, let's review fabrics. Fabrics such as Juniper's QFabric and Brocade's VCS are designed to replace the leaf-spine architecture in a data center. While the physical topology of a fabric resembles a leaf-spine design, the interconnects act as aggregation points rather than intelligent switches. They serve mostly to connect the leaf nodes together rather than make packet-forwarding decisions. Rather than program spine nodes as high-speed switches designed to facilitate east-west traffic, fabric topologies use interconnects to attach all the leaf-node switches together in a virtual mesh topology.
To a server node attached to a leaf switch, every destination appears to be one hop away. Packets can be sent over multiple paths inside the fabric to increase effective bandwidth. Fabrics are resilient when it comes to rerouting in the event an interconnect goes down. Fabrics shine when traffic flows can be split up and pushed across the backbone.
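The multipathing described above usually comes down to per-flow hashing: one flow always takes one path (so its packets arrive in order), while the population of flows spreads across every interconnect. Here's a minimal sketch of that idea; the interconnect names are illustrative, not any vendor's actual implementation.

```python
import hashlib

# Hypothetical interconnect links inside the fabric (names are made up).
INTERCONNECTS = ["ic1", "ic2", "ic3", "ic4"]

def pick_path(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash the 5-tuple so a given flow always rides the same interconnect
    (no packet reordering), while many flows spread across all of them."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return INTERCONNECTS[digest[0] % len(INTERCONNECTS)]
```

The important property is determinism: calling `pick_path` twice for the same flow returns the same link, which is exactly why a fabric can split traffic without breaking TCP.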
Fabrics are less useful when all that traffic is "hidden" in a tunnel, such as those created in NSX. NSX uses vWires, virtual wire constructs that encapsulate packets for transport between two hosts. A tunnel takes a path through the network and forces all of its traffic down that path. There's no way to see what's going through a given tunnel, at least from the network's perspective. To a switch, a tunnel looks like an opaque traffic flow.
[VMware NSX garnered a lot of attention at its launch, but Greg Ferro explains why it's too soon to crown it king of the data center in "VMware NSX Caution Signs."]
Fabrics need visibility into the traffic as it lands on the leaf-node switches. That's why Brocade, Juniper, and others have worked with VMware to include an NSX decapsulation agent on the edge of the fabric. If the vWire can be opened up, the fabric mechanisms for accelerating the data flows will work on the data contained within. When the traffic exits the other side of the fabric, the NSX agent can reassemble it back into a vWire tunnel. The fabric gets to work its magic and NSX never knows the difference.
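To see why the edge agent matters, consider encapsulation at its simplest: the inner packet becomes opaque payload behind an outer header, so a transit switch can only forward on the tunnel endpoints. This toy sketch is purely illustrative (it is not the real NSX wire format, which is based on protocols like STT or VXLAN); it just shows what decapsulating at the fabric edge restores.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

def encapsulate(inner: Packet, tunnel_src: str, tunnel_dst: str) -> Packet:
    # The inner packet is flattened into opaque payload; a transit switch
    # sees only the outer (tunnel endpoint) addresses.
    blob = f"{inner.src}|{inner.dst}|".encode() + inner.payload
    return Packet(tunnel_src, tunnel_dst, blob)

def decapsulate(outer: Packet) -> Packet:
    # An edge agent reverses the wrapping, exposing the inner headers
    # so the fabric can make decisions on them.
    src, dst, payload = outer.payload.split(b"|", 2)
    return Packet(src.decode(), dst.decode(), payload)
```

Everything between `encapsulate` and `decapsulate` only ever sees the tunnel endpoints, which is precisely the visibility problem the fabric-edge NSX agent solves.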
So how does this compare to OpenFlow?
OpenFlow provides many of the same advantages as a traditional networking fabric. It allows for path determination. You can even take it a step further by slicing the network and forcing traffic onto specific paths to ensure security separation or low-cost link utilization. OpenFlow gives you more granular control over your traffic flows, and it can deal with an NSX vWire much better than a traditional fabric because of the flow-based nature of NSX.
That's because OpenFlow doesn't merely load every packet into a tunnel construct and launch the whole thing into the network. OpenFlow uses hop-by-hop packet processing to determine the optimal route at every stop. It functions much like traditional packet forwarding, with the added advantage of a software controller sitting above the data plane.
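The hop-by-hop processing boils down to a match/action table on every switch: the highest-priority rule that matches a packet's fields determines the action, and a table miss gets punted to the controller. Here's a heavily simplified sketch of that lookup (field names and port numbers are illustrative, not the full OpenFlow match structure).

```python
def matches(rule, pkt):
    # A rule matches when every field it specifies equals the packet's value;
    # unspecified fields are wildcards, as in OpenFlow.
    return all(pkt.get(field) == value for field, value in rule["match"].items())

def lookup(flow_table, pkt):
    # Highest-priority matching rule wins; a miss goes to the controller.
    for rule in sorted(flow_table, key=lambda r: -r["priority"]):
        if matches(rule, pkt):
            return rule["action"]
    return "send_to_controller"

flow_table = [
    {"priority": 200, "match": {"dst_ip": "10.0.0.5", "tcp_dst": 80},
     "action": "output:2"},   # web traffic to this host takes port 2
    {"priority": 100, "match": {"dst_ip": "10.0.0.5"},
     "action": "output:1"},   # everything else to it takes port 1
]
```

Because every switch consults its own table, the controller can steer the same flow differently at each hop, which is the granularity a monolithic tunnel can't offer.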
That controller can "see" traffic patterns in the network and adjust accordingly at each individual hop. If a link goes down between two data centers, OpenFlow can adjust dynamically to route traffic around the failure. And because the controller can see all flows, it can intelligently adjust and split them so as not to overwhelm a backup link when a failure occurs.
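That failover-with-splitting behavior can be sketched in a few lines: when a link fails, a toy controller redistributes the affected flows round-robin across the surviving links instead of dumping them all onto a single backup. This is an assumption-laden illustration, not any real controller's API.

```python
from itertools import cycle

class ToyController:
    """Illustrative controller: tracks which link each flow is pinned to."""

    def __init__(self, links):
        self.links = set(links)
        self.flow_to_link = {}

    def assign(self, flow, link):
        self.flow_to_link[flow] = link

    def link_down(self, failed):
        # Spread the orphaned flows across the remaining links round-robin,
        # rather than piling them all onto one backup.
        self.links.discard(failed)
        backups = cycle(sorted(self.links))
        for flow, link in self.flow_to_link.items():
            if link == failed:
                self.flow_to_link[flow] = next(backups)
```

A protocol like Spanning Tree would have simply failed everything over to one alternate path; the global view is what lets the controller balance instead.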
Where OpenFlow differs is in the level of involvement required from the network team. Traditional fabrics have very little tuning ability: you set everything up and let the fabric director software decide how best to manage the traffic flows.
By contrast, OpenFlow has much greater reach but requires a bit more manual tinkering. Network admins can drill down to the individual flow level but run the risk of losing sight of the bigger picture. While a given flow can receive preferential treatment, it might lead to other flows being forced down suboptimal paths or getting lost in a sea of path reroutes based on complex policies.
Fabrics need visibility into the packets. Breaking the flows out of a vWire allows the fabric to work dark magic to make everything faster. OpenFlow gives you additional control over the vWire flows at the cost of requiring knowledge of that dark magic to reap the same kind of rewards.
By decapsulating the traffic from the vWire tunnel, you gain much deeper visibility into the contents of the payload. Have voice traffic that is delay sensitive? Forward it across a low-latency, higher-cost link. Is it bulk FTP or backup traffic? Send it down the low-cost links that have higher delay. Without extracting the payload from the tunnel, you can only forward every packet as fast as possible without regard for the optimal path, because in a vWire, every packet looks the same.
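That voice-versus-bulk decision is just a classification step once the inner headers are visible. A minimal sketch, assuming two hypothetical links ("premium" and "economy") and classifying by destination port; real deployments would likely match on DSCP markings as well.

```python
LOW_LATENCY_LINK = "premium"   # hypothetical higher-cost, lower-delay link
BULK_LINK = "economy"          # hypothetical cheaper, higher-delay link

def choose_link(dst_port: int) -> str:
    """Pick an egress link based on what the decapsulated packet carries."""
    voice_ports = range(16384, 32768)   # common RTP port range for voice
    bulk_ports = {20, 21, 873}          # FTP data/control, rsync
    if dst_port in voice_ports:
        return LOW_LATENCY_LINK        # delay-sensitive: pay for speed
    if dst_port in bulk_ports:
        return BULK_LINK               # bulk transfer: take the cheap path
    return BULK_LINK                   # default to the low-cost link
```

With the traffic still inside the tunnel, this function could never run: every packet would present the same outer header, and both voice and FTP would take whatever path the tunnel took.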
At the end of the day, I think there's a place for both models. Fabrics can exist in the data center to replace leaf-spine topologies and accelerate east-west traffic. OpenFlow can be used across the entire network to steer flows toward the fabric as well as ensure that flows traverse links to utilize things like load balancers or firewalls.
Fabrics represent the height of traditional networking hardware optimization in the data center, but if the future is truly in software, I think we'll see OpenFlow pull away sooner rather than later.
Do you use a network fabric? Or are you an OpenFlow shop? Will you be implementing NSX across both? Or neither? Let me know in the comments section below.
Tom Hollingsworth, CCIE #29213, is a former VAR network engineer with 10 years of experience working with primary education and the problems schools face implementing technology solutions. He has worked with wireless, storage, and server virtualization in addition to routing and switching. Recently, Tom has switched careers to focus on technology blogging and social media outreach as a part of Gestalt IT media. Tom has a regular blog at http://networkingnerd.net and can be heard on various industry podcasts pontificating about the role technology will play in the future.