Juniper Gets To '1' With New QFabric Family


Mike Fratto

February 24, 2011

Network Computing

Juniper finally got to 1 in its 3-2-1 strategy launched last year. The trend to flatten the network has been coming for a while. What's held back the flattening are little things, like being able to aggregate enough access ports over high-capacity, low-latency interconnects, broadcast control, traffic management and, frankly, a strong driver to re-architect the network. But that hasn't stopped Brocade, Cisco and Juniper from coming out with some innovative data center network offerings. Of the three, Juniper's seems to require less of a commitment by IT, which I think will help the company open doors.

QFabric, as Abner Germanow, Juniper's director of enterprise marketing, puts it, takes a chassis-based switch--where line cards handle access ports and forwarding, a supervisor or management engine provides the smarts, and a backplane ties it all together--and distributes those chassis functions across the data center. Rather than trying to home-run cables from servers to a central switch, top-of-rack (ToR) switches connect to the servers and are in turn cabled to a central backplane. To the rest of the network, a QFabric looks like a single switch and, in that regard, is similar to Brocade's Virtual Cluster Switching (VCS) and Cisco's Nexus 7K/5K/6100 + Nexus 2K extenders. QFabric will be integrated with and managed by Juniper's Junos Space management platform.

Juniper's architecture is an interesting move for the company. The Nexus platform is more of a commitment to Cisco's networking vision because the Nexus Fabric Extenders are driven by an upstream Nexus switch or, if you have UCS, by Unified Computing System (UCS) Interconnects. That might give some IT folks pause: after going through the effort and disruption of deploying the Nexus 2Ks, you can't easily switch to a new ToR platform. Committing to Nexus is a bigger bite.

The QFX switches can be part of a QFabric or they can be stand-alone ToR switches, which is a departure from Cisco's approach. Of course, Juniper knows that its market share is very small compared with either Cisco's or HP's, so its approach is to make sure QFabric can fit neatly into any data center, gain a toehold and then expand. As with any big switch architecture--Brocade's VCS, Cisco's Nexus and now Juniper's QFabric--you gain the most benefit if you go all in with one vendor. However, Juniper's strategy of being able to get in the door lowers a potential barrier to entry. So what is Juniper cooking?

QFabric is made up of three components. The first is the QF Node, a ToR switch. The initial QF Node, the QFX 3500, is a 48-port L2/L3 10Gbit switch that supports 48 1/10Gbit Ethernet ports, up to 12 of which can be configured as 2/4/8Gbit Fibre Channel. The QF Node can handle 1.28Tbit of switching, which is enough for 48 10Gbit ports plus a 480Gbit uplink to the interconnect. It also includes a Fibre Channel Forwarder (FCF) for direct connection to a Fibre Channel fabric; Juniper admitted it was OEM'ing some, or all, of the FCF features but wouldn't say from whom.

The second component, the Interconnect, which will be shipping in Q3, is the backplane. You can have up to four Interconnects, and they can support up to 128 QF Nodes for a total of 6,144 managed 10Gbit ports. The QF Nodes and Interconnects are cabled up using fiber, and some for-now-secret sauce, to carry 480-plus Gbit of traffic. The QF Node supports Fibre Channel over Ethernet, though today the product doesn't support 802.1Qau Congestion Notification. The third component, the QF Director, provides the centralized management.

The switching intelligence is contained in the QF Nodes rather than in the Interconnect, which only keeps track of ports and status. Architecturally, this is different from what Cisco is doing with the Nexus Fabric Extenders connecting to the Nexus 5000 family, where the switching smarts occur in the Nexus 5K and the extenders are dumb. According to Germanow, the switching intelligence is distributed across the QF Nodes and not replicated. In a traditional network, each node maintains a picture and a routing/forwarding table to decide where to send packets and frames on the next hop; each device is responsible for maintaining its view of the world and communicating that view to neighbors.
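As a sanity check on those figures, here's a quick back-of-the-envelope sketch of the fabric scale implied by the numbers above (the variable names are mine, not Juniper terminology):

```python
# Back-of-the-envelope math for the QFabric scale figures quoted above.
# Constants come from the article; names are illustrative, not Juniper's.

PORTS_PER_NODE = 48    # 10Gbit access ports on a QFX 3500 QF Node
PORT_SPEED_GBIT = 10
MAX_NODES = 128        # QF Nodes supported across up to four Interconnects

# Total managed access ports in a fully built-out fabric:
total_ports = PORTS_PER_NODE * MAX_NODES
print(total_ports)  # 6144 managed 10Gbit ports

# Aggregate access bandwidth across the whole fabric:
aggregate_tbit = total_ports * PORT_SPEED_GBIT / 1000
print(aggregate_tbit)  # 61.44 Tbit of access capacity
```

That 6,144-port figure is simply 48 ports times 128 nodes; the aggregate bandwidth is the part that makes "flattening" interesting at this scale.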

There could be several benefits from distributed intelligence. One that Juniper claims is better congestion management. The uplinks to the Interconnects are not Ethernet, so IEEE 802.1Qau Congestion Notification is not being used; rather, that is more of Juniper's secret sauce. Fabric topology changes--such as adding, removing or moving QF Node ToR switches, and the movement of physical and virtual machines--might be propagated more quickly than with other L2/L3 mechanisms like route updates, TRILL and spanning tree.

Also, Juniper has visions of integrating more services into its fabric. Our friends at Brocade and Cisco want to enhance their fabrics as well, but when we start talking about multiple terabits of capacity and microsecond latency, the engineering feats needed to scale services like load balancing and firewalling are staggering--not to mention the effort required to do something much simpler, like integrating the management components.

One thing is for sure: Other network infrastructure vendors like Arista, Extreme, Force10 and HP have some catching up to do.

About the Author

Mike Fratto

Former Network Computing Editor
