Putting The Catalyst 6500 In Its Place


Mike Fratto

July 12, 2010

8 Min Read

The 6500 is a workhorse in the core and the data center. Network admins love to hate it and hate to love it. The variety of blades that can be inserted for in-switch processing makes the platform applicable to many different network requirements. It's also an old platform that is getting long in the tooth. With a paltry 80Gbps (full duplex) interconnect to the backplane, the 6500 simply isn't suited to stand as a next-generation core data center switch. It's time to start thinking about putting it out to the data center/LAN edge or using it as a core campus switch. Fortunately, you will have options for replacing the data center network.

I have had this thought circulating in my head for a while now, but a blog post, Cisco's Rip and Replace Dilemma, from Stuart Miniman at Wikibon, a follow-up by John Furrier at Silicon Angle, and then a response by Steve Schuchart at Current Analysis prompted me to chime in. They are pretty smart characters, but I think I can add something to the discussion.

Any change you make to your data center is going to involve some disruptive replacement. Cisco is going to try to move you off the aging Catalyst platform, which really can't support the capacities that data centers will soon demand. HP didn't have a data center switch until it acquired 3Com and H3C, so whatever you're using from HP will almost certainly require a replacement. Juniper doesn't have the market adoption yet, so going with Juniper will also be a change. Brocade, Extreme, and Force10 have had high-capacity switches for a while, and if you already have them, you will likely get some life out of them for a few more years, even moving to 10Gb Ethernet.

Cisco's position is that it is disrupting its own users, and that gives you an opportunity to replace your incumbent network vendor, whoever that may be. Frankly, having managed and configured switches from most of the vendors out there, I can tell you that learning a new switch line isn't that hard. Going from IOS on the Catalyst to NX-OS is probably as big a change as learning another vendor's switch line. There is a learning curve, but it's pretty short, and depending on the CLI, it can be pretty shallow. Any time you make a big change that requires a large capital investment, it's an opportunity to look at what's in the market and see whether there is a better fit for you.

That should have your incumbent network equipment vendor nervous, and Cisco is nervous. That nervousness (and by "Cisco" I mean the Cisco corporate entity) isn't making the company fearful and fretful about losing your business. Just the opposite: the nervousness is driving Cisco to bring out new products and technologies far ahead of its competition, to tell you, the customer, that it leads the field and that an investment in the Nexus line will be long-lived. It's why Cisco rolled out FabricPath before TRILL is ratified by the IETF; the company wants to show that it is leading the industry. While Cisco tried to be clear that once TRILL was ratified, it would be supported on the Nexus, the message that came across was that users would have to choose between FabricPath and TRILL. That caused much commotion about Cisco pushing proprietary protocols to induce lock-in. All vendors want to be your single source for whatever they do; Cisco is no better or worse in that regard, as Schuchart points out, and I agree. But in the case of FabricPath, the plan is that TRILL will be supported in NX-OS, and you will be able to run FabricPath and TRILL concurrently, with Cisco Nexus switches interoperating with non-Cisco switches that support TRILL in one big happy fabric.

That aligns with Chambers's statement during a press meeting I attended at CiscoLive: "HP is going to be in the data center for a long time. Cisco is going to be in the data center for a long time. Our products will work together, but you will get value add if you go with Cisco [for computing and networking]. It's a major tactical mistake to make customers choose between vendors [all or nothing]." Pretty savvy thinking. A piece of the pie is better than no pie, and a piece of the pie means Cisco still has an "in" in the data center. Yes, Cisco wants to sell you servers and networking, but just networking will do. That way Cisco can keep talking about expanding its products into your data center, and Cisco is not unique in this approach.

There is plenty of worthy competition out there, however. Look at the landscape. Brocade has data center switching and storage switching (including CNAs and HBAs), so it can own the network end to end. That puts Brocade in a position to deliver a tightly integrated, end-to-end data center networking line. HP OEMs its SAN switches and CNAs/HBAs, but its integration strength is on the server side of the equation, along with SAN switching partnerships with Brocade and Cisco. That gives HP real potential for delivering robust data center solutions.

A lot depends on integration between H3C products and technologies and HP's server lines. Cisco, with UCS, offers a fully integrated data center package from server to SAN. While Juniper is the belle of the ball, it has to rely on partnerships with Dell and IBM to sell its equipment as part of an integrated data center package, and it probably has to compete with Brocade and Cisco in those deals. Even so, Juniper offers only the data side. Others, like Extreme and Force10, have less in the way of partnerships.

The next data center network you build is going to look awfully different from what you have today. The problem with a traditional three-tier architecture, as Miniman points out, is that you eventually run out of bandwidth at the uplinks even while available capacity sits wasted, and you can't make good use of the network connecting virtual servers and storage. Multi-pathing is going to make the single-tree network go away. Greg Ferro has a concise discussion of the capacity issue and how multi-pathing alleviates it. Essentially, the flattened layer 2 topology you get with multi-path shifts north-south (uplink/downlink) traffic to east-west (cross-connect) traffic, which reduces load north-south and increases load east-west. In addition, you can use the full capacity of the network.

The increasing use of virtualization is raising the number of nodes per port, which in turn raises the utilization per port. Server densities are increasing, with more compute power per rack unit. The potential to converge storage and data networking onto Ethernet, plus the increased use of iSCSI and file-based storage over 10Gb pipes, is putting much more traffic on Ethernet. As utilization goes up, so goes the need for more capacity. But the next-gen data center network isn't just about speeds and feeds. It's also about agility.
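To make the capacity point concrete, here's a rough back-of-the-envelope sketch of access-layer oversubscription. All port counts and speeds here are illustrative assumptions, not figures for any particular switch:

```python
# Back-of-the-envelope oversubscription math for a hypothetical access
# switch. All numbers are illustrative assumptions.

def oversubscription(server_ports, port_gbps, active_uplinks, uplink_gbps):
    """Ratio of server-facing capacity to active uplink capacity."""
    downlink = server_ports * port_gbps
    uplink = active_uplinks * uplink_gbps
    return downlink / uplink

# Single-tree design: the switch has four 10Gb uplinks, but spanning
# tree blocks two of them, so only two actually forward traffic.
print(oversubscription(48, 1, 2, 10))  # 2.4 (2.4:1 oversubscribed)

# Multi-path design (TRILL/FabricPath style): all four uplinks forward,
# so the same hardware halves the oversubscription ratio.
print(oversubscription(48, 1, 4, 10))  # 1.2 (1.2:1)
```

The same arithmetic is why "you can use the full capacity of the network" matters: multi-pathing doesn't add links, it stops idling the ones you already paid for.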

Virtualization gives you the ability to move workloads around a data center, add processing power to existing workloads much more quickly, and perform much faster back-up and recovery of systems, lowering recovery point objectives. That comes at a price, and at some point, you have to pay the network. You will need bigger speeds and feeds, for sure, but you also need to think less about where you have spare network capacity and more about where you have processing power. In other words, if you want to automate moves, adds, and changes, you want to know that you will have adequate capacity wherever a workload lands. Multi-pathing becomes your friend here: among other things, it has the potential to shift traffic load from north-south in a three-tier design to east-west in a flattened design. Between agility and increased bandwidth, multi-pathing is destined for 10Gb links.
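The "adequate capacity wherever a workload lands" idea can be sketched as a simple placement check. This is a hypothetical example with made-up host names and numbers, not the API of any real orchestration tool:

```python
# Hypothetical capacity-aware placement check: pick a host whose access
# link has enough spare bandwidth for a workload's expected traffic.
# Host names, link speeds, and utilization figures are all made up.

hosts = {
    "host-a": {"link_gbps": 10, "used_gbps": 9.2},  # nearly saturated
    "host-b": {"link_gbps": 10, "used_gbps": 4.0},  # plenty of headroom
    "host-c": {"link_gbps": 1,  "used_gbps": 0.3},  # small pipe
}

def place(workload_gbps, hosts):
    """Return the first host with enough network headroom, else None."""
    for name, h in hosts.items():
        if h["link_gbps"] - h["used_gbps"] >= workload_gbps:
            return name
    return None

print(place(2.0, hosts))  # host-b; host-a lacks the headroom
```

In a flattened multi-path fabric the "used" figures stay more uniform across hosts, so a check like this succeeds in more places and automated moves have more valid landing spots.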

The 6500 is an aging switch line that can't support these increased demands, so trying to extend its life as a data center switch just doesn't make much sense from a product development standpoint. I know, you aren't going to go and make a radical change tomorrow. You might still have that 6500 for several years. In fact, at CiscoLive, John McCool, Senior Vice President/General Manager, Data Center, Switching and Services Group, said that Cisco sees a market of some 70 million Gb Ethernet ports in the data center in the next few years because 10Gb is still not widely deployed on server motherboards. What will likely happen with the 6500, he said, is that as port densities increase and more servers ship with 10Gb LAN on motherboard, the 6500 will move to a data center edge, WAN edge, or campus router/switch role where there is less demand for high-capacity port densities. Sounds like a nice retirement plan to me.

Here's how to make the most of your network. Read Next-Gen LAN: Everything Except the Kitchen Sink (registration required).

About the Author(s)

Mike Fratto

Former Network Computing Editor
