Is Hyperscale Possible In Networking?

Networking has long faced scaling limitations, but Brocade is trying to break down the barriers with its HyperEdge Architecture.

Tom Hollingsworth

December 2, 2013


Hyperscale is everywhere today. Companies like Nutanix and Scale Computing are showing how the hyperscale model can spread workloads across clusters of nodes to increase processing power and storage capacity. Adding capacity is easy: purchase another node, install it into your system, and you gain more resources across the board. That works well for storage and compute. But can it be done in networking?

Brocade certainly thinks so. The company recently launched the latest addition to its HyperEdge campus networking portfolio. The new ICX 7750 Switch is designed to fill a role that Brocade sees in the campus network. The specs are impressive: 1/10/40 GbE ports, OpenFlow 1.3 support, and software conveniences like In-Service Software Upgrade (ISSU). That last point betrays where Brocade sees the 7750 fitting into the HyperEdge idea.
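
For readers who haven't worked with OpenFlow 1.3, the short sketch below shows what controller-side support can look like in practice. It uses the open-source Ryu framework purely for illustration; Brocade doesn't prescribe a controller, and the class and handler names here are my own. The app installs a table-miss flow entry on any OpenFlow 1.3 switch that connects to it.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissSetup(app_manager.RyuApp):
    # Speak only OpenFlow 1.3, the version the ICX 7750 advertises.
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        # Install a priority-0 table-miss entry that punts unmatched
        # packets to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                            match=match, instructions=inst))

Launched with ryu-manager, the same few lines work against any switch that speaks OpenFlow 1.3, which is exactly the appeal of having the standard supported in campus hardware.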

Brocade's HyperEdge Architecture is designed to work in a stack as a chassis replacement. The argument is sound: chassis switches are monolithic, and their performance is bounded by how far the unit's backplane can scale.

A chassis can't scale past the rating of the interconnects between its line cards, which means it has to be overbuilt at design time. Otherwise, you can't grow from 100 Mbps ports all the way to 100 GbE in the same device. That means you're paying for more capacity than you can use at present. When it comes time to unlock that capacity, you must pay to upgrade a supervisor module or other director piece that determines the speed limit of the cards in the chassis. More often than not, you have to upgrade the cards as well.

Using stacks of fixed-configuration switches instead of chassis line cards does away with this problem. When you need faster ports, all you need to do is buy a switch with the proper configuration and insert it into the stack. The other stack members recognize the new device, install the correct software, and away your packets flow. Inserting switches into a stack shouldn't incur downtime, either, so the fear of taking down hundreds of ports during a line card swap largely goes away.

On the surface, you gain all the advantages of hyperscale computing. Capacity can be added when needed and you only pay for the capacity you need at any given time. But the real question is: Does this really scale?

Switch stacks will always have an upper bound. Whether that bound comes from the backplane interconnects or from the stack manager's ability to forward packets, the result is the same: just like a physical chassis, a stack can hold only a finite number of members. Compare that to a Nutanix system, which has no effective limit on the number of nodes; in essence, it can scale infinitely. Realistically, you'll run out of users to consume resources before you hit the actual limit.

[Read about Brocade's proposal for a network management layer for OpenStack in "Brocade Pitches OpenStack Management Layer."]

Hyperscale in a networking environment must take other things into consideration, and the physical layer is one of the most important. The 100-meter limit on Ethernet cable runs means only so many cables can be provisioned for a given space. Brocade says the ICX 7750 can easily stack sixteen 48-port switches together for 768 wiring-closet ports, which may be enough for the geography of a given campus building or office section.

However, power requirements are also a concern. How am I going to power 16 switches in a wiring closet? The ICX 7750 draws just over 500 watts, so a full stack of 16 would pull more than 8,000 watts for the networking gear alone. I wonder how many campus wiring closets have that much dedicated power available.
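
To make that closet-level constraint concrete, here's a quick back-of-the-envelope calculation using only the figures cited above; the supply voltage is an illustrative assumption, not a Brocade spec.

# Rough figures for a fully built 16-switch ICX 7750 stack, using the
# numbers cited in this article; treat them as assumptions, not verified specs.
SWITCHES_PER_STACK = 16        # maximum stack members Brocade cites
ACCESS_PORTS_PER_SWITCH = 48   # 48-port wiring-closet configuration
WATTS_PER_SWITCH = 500         # "just over 500 watts" per unit
SUPPLY_VOLTS = 208             # illustrative closet supply voltage

total_ports = SWITCHES_PER_STACK * ACCESS_PORTS_PER_SWITCH
total_watts = SWITCHES_PER_STACK * WATTS_PER_SWITCH

print(f"Wiring-closet access ports: {total_ports}")          # 768
print(f"Network power draw: {total_watts} W "
      f"(about {total_watts / SUPPLY_VOLTS:.0f} A at {SUPPLY_VOLTS} V)")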

Networking has long been a search for ways around scaling limitations. Switches with more physical ports gave way to chassis units that could house multiple forwarding cards. Now, vendors such as Brocade are taking the lessons learned from the chassis and applying them to scale more efficiently with smaller, more intelligent units.

There are limitations no matter which way you go. The question is whether you want to keep plugging cards into a chassis whose hardware will eventually be eclipsed, or take a chance on newer technology that will improve over time to deliver the same performance with few of the downsides. Hyperscale has clear advantages if vendors can embrace it quickly enough.

About the Author

Tom Hollingsworth

Tom Hollingsworth, CCIE #29213, is a former VAR network engineer with 10 years of experience working with primary education and the problems schools face implementing technology solutions. He has worked with wireless, storage, and server virtualization in addition to routing and switching. Recently, Tom switched careers to focus on technology blogging and social media outreach as part of Gestalt IT media. Tom has a regular blog at http://networkingnerd.net and can be heard on various industry podcasts pontificating about the role technology will play in the future.
