QFabric, as Abner Germanow, Juniper's director of enterprise marketing, puts it, takes a chassis-based switch (where line cards handle access ports and forwarding, a supervisor or management engine provides the smarts, and a backplane ties it all together) and distributes those chassis functions across the data center. Rather than home-running cables from every server to a central switch, top-of-rack (ToR) switches connect to the servers and are in turn cabled to a central backplane. To the rest of the network, a QFabric looks like a single switch; in that regard, it is similar to Brocade's Virtual Chassis Switch and Cisco's Nexus 7K/5K/6100 plus Nexus 2K extenders. QFabric will be integrated with and managed by Juniper's Junos Space management platform.
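The chassis-to-fabric mapping described above can be jotted down as a tiny sketch. This is purely illustrative, with roles paraphrased from this article, not from Juniper documentation:

```python
# Illustrative mapping of classic chassis-switch roles to their distributed
# QFabric counterparts, as described in the text above.
chassis_to_qfabric = {
    "line card (access ports, forwarding)": "QF Node (top-of-rack switch)",
    "supervisor/management engine": "central management layer",
    "backplane": "QF Interconnect (the central fabric the nodes cable into)",
}

for chassis_role, fabric_part in chassis_to_qfabric.items():
    print(f"{chassis_role} -> {fabric_part}")
```

The point of the decomposition is that each role scales independently: you add QF Nodes for ports rather than swapping in a bigger chassis.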
Juniper's architecture is an interesting move for the company. The Nexus platform represents a deeper commitment to Cisco's networking vision, because the Nexus Fabric Extenders are driven by an upstream Nexus switch, or by Unified Computing System (UCS) interconnects if you run UCS. That might give some IT folks pause: after going through the effort and disruption of deploying Nexus 2Ks, you can't easily switch to a different ToR platform. Committing to Nexus is a bigger bite.
The QFX switches can be part of a QFabric or serve as stand-alone ToR switches, which is a departure from Cisco's approach. Of course, Juniper knows its market share is small compared with either Cisco's or HP's, so its strategy is to fit neatly into any data center, gain a toehold, and then expand. With any big switch architecture, whether Brocade's VCS, Cisco's Nexus or now Juniper's QFabric, you gain the most benefit by going all in with one vendor. However, a strategy like Juniper's, of being able to get in the door, lowers a potential barrier to entry. So what is Juniper cooking?
QFabric is made up of three components. The first is the QF Node, a ToR switch. The QFX3500 is a 48-port L2/L3 10Gbit switch whose ports can run as 48 1/10Gbit Ethernet ports or up to 12 2/4/8 Gbit Fibre Channel ports. The QF Node can handle 1.28Tbit of switching, enough for 48 10Gbit ports plus 480 Gbit of uplink to the Interconnect, and it includes a Fibre Channel Forwarder (FCF) for direct connection to a Fibre Channel fabric. Juniper admitted it was OEM'ing some, or all, of the FCF features but wouldn't say from whom. The QF Interconnect, which will ship in Q3, is the backplane. You can have up to four Interconnects, and together they support up to 128 QF Nodes for a total of 6,144 managed 10Gbit ports. The QF Nodes and Interconnects are cabled up with fiber, plus some for-now-secret sauce, to carry 480-plus Gbit of traffic. The QF Node supports Fibre Channel over Ethernet, though today the product doesn't support 802.1Qau congestion notification.
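The scale figures above are easy to sanity-check with a little arithmetic. A back-of-the-envelope sketch, using only the numbers quoted in this article:

```python
# Back-of-the-envelope check of the QFabric figures quoted above.
ports_per_node = 48   # 1/10Gbit Ethernet ports on a QFX3500 (QF Node)
max_nodes = 128       # QF Nodes supported across up to four Interconnects

total_ports = ports_per_node * max_nodes
print(total_ports)    # 6144 managed 10Gbit ports, matching Juniper's figure

# Per-node traffic: 48 x 10Gbit of server-facing access plus the 480 Gbit
# uplink to the Interconnect, which fits within 1.28 Tbit of switching.
access_gbit = ports_per_node * 10   # 480 Gbit toward the servers
uplink_gbit = 480                   # uplink toward the Interconnect
print(access_gbit + uplink_gbit)    # 960 Gbit, under the 1.28 Tbit ceiling
```

In other words, the quoted 1.28 Tbit of per-node capacity leaves headroom beyond the 960 Gbit that a fully loaded node would push, and 128 fully populated nodes account for the 6,144-port ceiling.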