
Why I Like Juniper's QFabric (And A Mea Culpa)

While I was visiting Juniper in early December, I got a chance to sit down with the QFabric folks to discuss some of the issues I had with QFabric, which I saw as a proprietary product set (with all the badness that word implies) in search of a reason. While QFabric is proprietary in how the components are interconnected, I came away with the impression that the overall design and capacity are extremely powerful, and I think the upsides of the QFabric product set far outweigh the downsides. Given a month between visiting Juniper and now, I'd say all my ballyhoo about it being proprietary was a non-issue. My bad.

Juniper's QFabric, in a nutshell, distributes the traditional chassis switch into discrete components. The top-of-rack (ToR) switches, called QFNodes, are the line cards. The QFInterconnect, which the QFNodes connect to over OM-3 or OM-4 fiber, is the backplane, and the QFDirector(s) are the supervisors (in Cisco parlance), or managers. Each QFNode is connected to between two and four QFInterconnects via 40-Gbit links, and there are two QFDirectors connected to the QFNodes and QFInterconnects via an out-of-band 1-Gbit link.
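To make the distributed-chassis idea concrete, here is a minimal Python sketch of the roles described above. It is purely illustrative, not Juniper tooling; the class and attribute names are my own, and the only rules it checks are the ones stated in this paragraph (two to four 40-Gbit uplinks per QFNode, a pair of QFDirectors, up to 128 nodes managed as one switch).

```python
from dataclasses import dataclass, field

# Illustrative model of the QFabric "distributed chassis" roles described above.
# Names are invented for this sketch, not Juniper's.

@dataclass
class QFNode:                      # plays the role of a line card (ToR switch)
    name: str
    interconnect_uplinks: list = field(default_factory=list)  # 40-Gbit links

@dataclass
class QFInterconnect:              # plays the role of the backplane
    name: str

@dataclass
class QFDirector:                  # plays the role of the supervisor/manager
    name: str

def validate_fabric(nodes, interconnects, directors):
    """Check the connectivity rules described in the article."""
    assert len(directors) == 2, "QFabric uses a pair of QFDirectors"
    for node in nodes:
        uplinks = len(node.interconnect_uplinks)
        assert 2 <= uplinks <= 4, f"{node.name}: each QFNode uplinks to 2-4 QFInterconnects"
    assert len(nodes) <= 128, "the fabric is managed as one switch of up to 128 QFNodes"

# Example: a small fabric with two interconnects and two ToR nodes.
ics = [QFInterconnect("ic-0"), QFInterconnect("ic-1")]
nodes = [QFNode(f"node-{i}", interconnect_uplinks=ics[:]) for i in range(2)]
validate_fabric(nodes, ics, [QFDirector("dir-0"), QFDirector("dir-1")])
print("fabric topology satisfies the constraints described above")
```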

Greg Ferro, who does network design and consultation for large organizations and also contributes to Network Computing, has written a nice explanation of QFabric and outlines some of its benefits.

Here's why I like it. It's operationally simple. The distributed-chassis metaphor is apt and means that multi-switch management is greatly simplified. You can manage up to 128 switches as if they were a single switch, which, for all intents and purposes, they are. Think about that for a moment. You don't have to maintain credentials, or authentication configuration if you're using RADIUS or some other authentication server, across 128 switches.

You don't have to integrate 128 devices into your network management system (NMS), hypervisor management system, or other IT systems. Even with scripting or an NMS, making sweeping changes to 128 individual switches in a network is dicey, as the sketch below illustrates. Granted, you can aggregate multidevice management to simulate a single pane of glass, but that means introducing more servers and management protocols that can get in the way or break down. As the number of things you need to manage grows, the simpler your management framework needs to be.
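Here's a hypothetical Python sketch of that operational difference. None of these functions are a real vendor or NMS API; they are stand-ins meant only to show why one management point beats 128 separate sessions for a sweeping change.

```python
# Hypothetical sketch of why 128 individual switches are harder to change
# than one logical switch. These functions are invented for illustration.

def push_vlan_to_switch(switch_name: str, vlan_id: int) -> bool:
    """Pretend to push a VLAN to one switch; returns False on failure."""
    # In real life: a session per switch, credential lookup, error handling...
    return True  # stubbed out for illustration

def push_vlan_individually(switches: list[str], vlan_id: int) -> list[str]:
    """The 128-switch way: 128 sessions, 128 chances for partial failure."""
    failed = [s for s in switches if not push_vlan_to_switch(s, vlan_id)]
    return failed  # any non-empty result leaves the network inconsistent

def push_vlan_to_fabric(fabric_name: str, vlan_id: int) -> bool:
    """The distributed-chassis way: one management point, one change."""
    return push_vlan_to_switch(fabric_name, vlan_id)

switches = [f"tor-{i:03d}" for i in range(128)]
print("failed switches:", push_vlan_individually(switches, vlan_id=100))
print("fabric change ok:", push_vlan_to_fabric("qfabric-1", vlan_id=100))
```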

Traffic-wise, you don't have to worry about multiple paths, spanning tree, building N tiers, or deciding where to set up routing, since QFabric also routes (although Juniper is quick to point out that you likely wouldn't replace your edge or core router with a QFabric, just as you wouldn't replace them with a 1U ToR L2/L3 switch). Any two points in the QFabric are a mere 5 microseconds apart. Unless your company requires ultra-low latency, anything below 1 millisecond (typically the granularity at which latency is measured and reported in enterprise switches) is probably fine. But, hey, less is better in any case. If you need more capacity at the edge, you can add additional switches fairly cost-effectively, as Ferro points out.
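As a quick back-of-the-envelope check, the arithmetic below just restates the two figures from this paragraph: the quoted 5-microsecond any-to-any fabric latency and the 1-millisecond granularity at which enterprise switch latency is typically reported.

```python
# Back-of-the-envelope latency comparison using the figures quoted above.
fabric_latency_us = 5               # any-to-any across QFabric, per Juniper
reporting_granularity_us = 1_000    # 1 millisecond, typical reporting granularity

print(f"fabric latency: {fabric_latency_us} us")
print(f"fraction of 1 ms granularity: {fabric_latency_us / reporting_granularity_us:.1%}")
# -> 0.5% of the 1 ms threshold, i.e. comfortably below it
```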

Bear in mind that, currently, each QFX3500 QFNode can be oversubscribed at 3 to 1, based on 48 10-Gbit ports facing the access devices and four 40-Gbit uplink ports facing the QFInterconnects: 480 Gbits inbound going into a 160-Gbit uplink makes 3 to 1. However, engineers at Juniper said the limitation today is the interface speed of the uplink ports. There is no such limitation in the QFInterconnect, so speeds can increase in the future, provided Juniper ships QFInterconnect cards and QFNodes that support higher capacities.
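The oversubscription ratio falls straight out of the port counts above; the short calculation below simply restates them.

```python
# Oversubscription math for a QFX3500 QFNode, using the port counts above.
access_ports = 48
access_speed_gbit = 10
uplink_ports = 4
uplink_speed_gbit = 40

access_capacity = access_ports * access_speed_gbit    # 480 Gbit/s facing access devices
uplink_capacity = uplink_ports * uplink_speed_gbit    # 160 Gbit/s toward QFInterconnects

print(f"access: {access_capacity} Gbit/s, uplink: {uplink_capacity} Gbit/s")
print(f"oversubscription: {access_capacity // uplink_capacity}:1")   # -> 3:1
```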
