2011 Was An Awesome Year For Networking
December 28, 2011
After about eight or nine years of stagnation in networking innovation, the number of new technologies that emerged in 2010 and exploded in 2011 is astounding. Speeds and feeds keep increasing, but the exciting work in 2011 was in new technologies supporting initiatives like cloud computing, storage and data convergence, and the migration to IPv6. Here are the highlights.
Multipath Ethernet was all the rage in 2011. Protocols like Multichassis Link Aggregation (MLAG), Transparent Interconnection of Lots of Links (TRILL), Shortest Path Bridging (SPB) and various proprietary protocols are all aimed at solving one of the thorniest issues in networking: getting rid of spanning tree and making use of all the interconnects between switches. The problem is that none of the multipath Ethernet product suites are standards-compliant. Part of the issue is that TRILL and SPB still aren't fully ratified, so there isn't a standard to conform to. But the other part is that early implementations of the current protocol drafts have gone far afield of what will likely be the final version. Brocade's VCS uses TRILL framing but not IS-IS, the protocol TRILL switches use to form a coherent view of the network. Cisco's FabricPath has taken TRILL and "enhanced" it to work better. Both Cisco and Brocade claim they will support standard TRILL after it is ratified.
Of course, the question has to be asked: Is multichassis link aggregation (MLAG) good enough? Unless you have an Internet-scale data center with tens of thousands of servers, you probably don't have the port count, port density, or strict SLAs that would require the partial- or full-mesh network a TRILL-based fabric could provide. If all you need to do is reduce the oversubscription between your EoR/ToR switches and the core, then MLAG may be a workable choice. HP thinks that eschewing both TRILL and SPB in favor of MLAG is the way to go.
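To make the oversubscription point concrete, here is a minimal sketch of the arithmetic. The port counts and speeds are hypothetical examples, not figures from any vendor mentioned above:

```python
# Hypothetical ToR switch; all figures are illustrative assumptions.
def oversubscription_ratio(server_ports, server_gbps, uplink_ports, uplink_gbps):
    """Total server-facing (downlink) capacity divided by uplink capacity to the core."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# 48 x 10GbE server-facing ports against 4 x 40GbE uplinks
ratio = oversubscription_ratio(48, 10, 4, 40)
print(f"{ratio:.0f}:1")  # -> 3:1
```

Adding uplinks via MLAG (say, doubling to 8 x 40GbE across two upstream switches) halves that ratio without any new protocol machinery, which is exactly the "good enough" argument.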
Juniper, for its part, went in a totally different direction with QFabric, taking the chassis concept and distributing its components: a stand-alone director acts as the brains of QFabric, while ToR switches connect to servers and home-run back to a backplane chassis. It's a bold move, and the proprietary approach is one that we have been critical of.
The question of whether multipath Ethernet standards will ever be implemented and, more importantly, whether various vendor products will interoperate is cloudy at best. Perhaps standards don’t matter and vendor choice does, because in all likelihood, if you are going to buy into a vendor’s fabric, you’re going all in.
All-in with OpenFlow
Software-Defined Networking (SDN), which allows applications and systems other than traditional network management to manipulate the network, builds on multipath Ethernet, converged networking and orchestration; so far it has primarily been used to build private clouds in your own data center. The darling, of course, is OpenFlow, a protocol designed for controller-based flow management. The hyperbole around OpenFlow has been thick, with claims that it will commoditize switching, make networks faster and more reliable, and treat male pattern baldness. The first two claims are just outrageous.
There is value in OpenFlow, and the promise of a programmable network that is both dynamic and robust is powerful, but let's remember that OpenFlow made its commercial debut in 2011, with NEC and Fujitsu announcing switch platforms at Interop 2011 and BigSwitch announcing a controller. The InteropNet Labs OpenFlow demonstration showed the tip of the iceberg of what can be accomplished with OpenFlow-based networking, but we have yet to see anything unique or innovative. That's coming.
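The controller-based flow management idea can be sketched in a few lines. This is a toy model, not the OpenFlow wire protocol: a switch punts unmatched packets to a controller, which installs a match/action entry so later packets are handled in the fast path. The match fields, the MAC address, and the port numbers are all invented for illustration:

```python
# Toy flow table keyed on (in_port, dst_mac) -> action tuple.
# Assumption: a single switch and a trivial controller policy.
flow_table = {}

def packet_in(in_port, dst_mac):
    """Controller logic: decide an action and install it as a flow entry."""
    action = ("output", 2) if dst_mac == "00:00:00:00:00:02" else ("flood",)
    flow_table[(in_port, dst_mac)] = action  # future packets match without the controller
    return action

def forward(in_port, dst_mac):
    """Switch fast path: look up the flow table, punt misses to the controller."""
    return flow_table.get((in_port, dst_mac)) or packet_in(in_port, dst_mac)

forward(1, "00:00:00:00:00:02")  # first packet punts to the controller
forward(1, "00:00:00:00:00:02")  # subsequent packets hit the installed flow
```

The point of the pattern is the split: policy lives in one programmable place (the controller), while switches stay simple table-lookup engines, which is what underpins the commoditization claims above.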