Life in the Really Fast Lane

10 Gigabit Ethernet switches from Extreme and Foundry can move a whopping 8 billion bits per second--for a price.

January 20, 2003

13 Min Read

After much preparation, we sent invitations to Alcatel, Cisco Systems, Enterasys Networks, Extreme Networks, Force10 Networks, Foundry Networks, Hewlett-Packard Co. and Nortel Networks. When push came to shove, though, only Extreme and Foundry rose to the challenge. Alcatel did not have a product ready, while Cisco, Force10, HP and Nortel simply declined to participate. Enterasys offers a standalone 10-gigabit Layer 2 switch that did not meet the requirements of our tests, even though it reportedly had been updated with an 802.3ae-compatible interface.

Extreme's BlackDiamond and Foundry's BigIron proved that there are at least two companies that offer serious 802.3ae-based connectivity--and are ready to put their switches where their advertising claims are. Not only did the BlackDiamond and BigIron boxes interoperate with the standard and each other, they maintained impressive performance levels even when subjected to tests their competitors ducked. And they offer a wide range of features--from high availability to support for all major routing protocols to multicast routing and QoS (Quality of Service). We were impressed.

Unlike with Fast Ethernet and Gigabit Ethernet, true 10 Gigabit Ethernet performance isn't a given. We're not yet at the point where chips providing Layer 2 and Layer 3 forwarding at wire speed are readily available, and you must factor in features, such as access lists and QoS, that require even more processing power for every packet sent. And beaucoup packets are being sent--wire speed for 10 Gigabit Ethernet makes it possible to forward roughly 30 million minimum-size packets per second on each full-duplex interface, versus 3 million for gigabit.
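Those per-packet figures follow directly from frame arithmetic: a minimum-size 64-byte Ethernet frame costs 84 bytes on the wire once the 8-byte preamble and 12-byte inter-frame gap are counted. This quick sketch (plain Python, nothing vendor-specific) shows where the numbers come from; the 30-million figure counts both directions of a full-duplex link:

```python
# Wire-speed packets-per-second for minimum-size Ethernet frames.
# Each 64-byte frame also costs 8 bytes of preamble and a 12-byte
# inter-frame gap on the wire, so 84 bytes (672 bits) per frame.

def wire_speed_pps(link_bps, frame_bytes=64):
    bits_per_frame = (frame_bytes + 8 + 12) * 8
    return link_bps / bits_per_frame

print(f"1 Gbps:  {wire_speed_pps(1e9):,.0f} pps")   # ~1.49 million per direction
print(f"10 Gbps: {wire_speed_pps(10e9):,.0f} pps")  # ~14.9 million per direction
```

Double those rates for a full-duplex link and you get roughly the 3 million and 30 million packets per second cited above.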

Of course, these hurdles haven't stopped many vendors from shipping 10-gigabit products; we can only speculate about why they stopped them from participating in our tests. In spite of the challenges presented by 10 Gigabit Ethernet, Extreme and Foundry hit the ground running. But architectural constraints limit each card to an 8-gigabit connection to the backplane, so bandwidth tops out at 8 gigabits in each direction between the ports and the backplane. Instead of 10 times the bandwidth of Gigabit Ethernet, you'll see an eightfold increase. That's a little disappointing, but both vendors were open about it, reasoning that eight times 1 gigabit is still a dramatic improvement for anyone who has exceeded the capacity of Gigabit Ethernet. We found it hard to argue with that, especially considering that the full 8 gigabits of bandwidth held up so well under the worst-case scenarios in our tests.

Chances are, most of you don't need 10 Gigabit Ethernet, but if you've read this far you're probably considering taking the plunge--or you're an aficionado of the biggest, latest and greatest. If your applications are mostly text-based, like databases and the Web, your gigabit backbone is handling the load just fine.

Of course, when performance degrades, most users' immediate reaction is to blame the network. It's your job to be rational. If you think your backbone connections are the bottleneck, look at utilization. This isn't hard to do, and there's no excuse not to make the effort periodically. Even free software like MRTG can use SNMP to pull the octet counters from an interface and calculate utilization (for a review of network management gadgets, including MRTG, see "SolarWinds Sheds Light on Networks"). Some analyzers can tap directly into a backbone gigabit connection to give you that information in real time. Interestingly, an enterprise analyzer that will do that for 10 Gigabit Ethernet still isn't available.
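Turning two polls of an octet counter into a utilization figure is simple arithmetic. Here's a minimal sketch of the calculation MRTG-style tools perform on SNMP's ifInOctets; the counter values and poll interval are hypothetical:

```python
# Link utilization from two polls of an interface octet counter
# (e.g. SNMP's ifInOctets) -- the arithmetic MRTG-style tools do.
# The counter values and poll interval below are hypothetical.

def utilization(octets_t0, octets_t1, interval_s, link_bps,
                counter_max=2**32):
    # Modulo handles a 32-bit counter wrapping between polls.
    delta_bytes = (octets_t1 - octets_t0) % counter_max
    return (delta_bytes * 8) / (interval_s * link_bps)

# Hypothetical 5-minute poll on a gigabit backbone link:
# 3 GB transferred in 300 seconds = 80 Mbps = 8% utilization.
u = utilization(1_000_000_000, 4_000_000_000, 300, 1e9)
print(f"{u:.1%}")  # 8.0%
```

If your polls routinely show a backbone link pinned near 100 percent, that's the evidence you need before shopping for 802.3ae gear.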

The bottom line: Either you need more bandwidth or you don't. If you can saturate a gigabit link, you have a problem. There are ways to balance network traffic across multiple gigabit trunks, and sometimes a redesign will mitigate your need by localizing traffic. So you may be able to get by while you wait for the inevitable price drop. On the other hand, you may be in a situation where the fiber to support multiple trunks is more expensive than even the high price of 802.3ae, which can range from $25,000 to $59,000 for Extreme and Foundry. It's also a truism of network design that simpler is better, and 802.3ae does indeed provide a simple, scalable solution for companies that have exhausted their gigabit connections.

Torture Testing

We used an Ixia 1600T chassis-based device to generate various types of Layer 2 and Layer 3 traffic. Ixia's testing equipment can blast precise patterns of traffic at levels limited only by the theoretical bandwidth of the interface. The 1600T came equipped with two 10 Gigabit Ethernet LAN modules as well as gigabit interfaces. Ixia's IxExplorer GUI and ScriptMate applications made it easy for us to hammer the Extreme and Foundry boxes with a barrage of full-duplex tests. The 10-gigabit modules supported the 1310nm LAN interface, which we asked the vendors to provide because it's the most common type of physical interface used with 802.3ae (see "10 Gig Can't Wait to Interoperate" for an explanation of the different interfaces). We set up our tests with zero tolerance for packet drops: If even one packet were lost, the test would fail.

We began with standard throughput tests at various packet sizes. The smaller the packet, the harder the device under test had to work, because it needed to inspect each header--for an Ethernet address at Layer 2 and an IP address at Layer 3--and then look up the address in the appropriate table. Because there's a header on every packet, and more small packets than large ones fit in the same amount of time, shrinking the packets meant a corresponding increase in headers requiring inspection. Both boxes handled these tests at both layers without a hiccup. As expected, they performed at roughly 8 Gbps because of the bottleneck between each card and the backplane.

We ratcheted the test up by turning on a 100-line access list on each box, arranged so that every IP packet forced a lookup on all 100 lines, with a permit on the very last line. We also cranked up the difficulty on the Ixia box by having it cycle through 10,000 unique IP addresses during the throughput tests. This challenges the device under test, which will attempt to cache IP flows to increase the efficiency of its lookups. We were impressed that, in spite of this stress, both the Extreme and the Foundry boxes nailed this test, maintaining exactly the same throughput they achieved without access lists turned on.
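Our zero-drop criterion mirrors the classic RFC 2544 throughput procedure: offer a load, check for loss, and binary-search toward the highest lossless rate. The sketch below shows that search logic; send_and_count() is a hypothetical stand-in for the real tester and device under test, modeled as a DUT that forwards cleanly up to the 8-of-10-gigabit backplane limit we observed:

```python
# Sketch of an RFC 2544-style zero-loss throughput search: offer a
# load, count received frames, and binary-search for the highest
# rate with no drops. send_and_count() is a hypothetical stand-in
# for the real tester and device under test.

def send_and_count(rate_pct, dut_capacity_pct=80.0):
    # Hypothetical DUT that forwards losslessly up to 80% of line
    # rate (the 8-of-10-gigabit backplane limit we observed).
    return rate_pct <= dut_capacity_pct  # True = zero frames lost

def zero_loss_throughput(resolution=0.1):
    lo, hi = 0.0, 100.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if send_and_count(mid):
            lo = mid          # no loss: try a higher rate
        else:
            hi = mid          # loss: back off
    return lo

print(f"{zero_loss_throughput():.1f}% of line rate")
```

In our tests the search was moot--both boxes ran lossless at the full 8 Gbps their backplane connections allow.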

We then turned on QoS and sent alternating high- and low-priority packets with corresponding settings in the ToS (Type of Service)/DiffServ (Differentiated Services) bits. We observed how well the devices could handle having their bandwidth oversubscribed. To accomplish this, we added an extra 1 gigabit of input, so that ports capable of 8 gigabits of output had to deal with the normal maximum traffic plus an extra gigabit's worth. We made sure there was always enough bandwidth to handle the high-priority traffic by itself, then checked to see if all the high-priority traffic arrived. We also varied the number of high- and low-priority packets sent at one time. We started by alternating between three high- and three low-priority packets, the lowest the Ixia device could handle, and worked our way up, sending larger bursts of each kind of traffic. Both the BlackDiamond and BigIron did fine ... until we started hitting 500-packet bursts. At that point, the Foundry box started dropping some high-priority packets. By the time we got to 10,000-packet bursts, the BigIron didn't appear to be giving any preference to the high-priority packets. The company said it performed a similar test (with better results) with a Spirent Communications SmartBits box, but that was with beta software. We suspect the SmartBits test was sending different traffic patterns.
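What we were checking for is classic strict-priority behavior: when a port is oversubscribed, only low-priority traffic should be dropped. This toy scheduler illustrates the idea; it's an idealized sketch that ignores the finite queue depths and burst dynamics that apparently tripped up the BigIron at large burst sizes:

```python
# Toy strict-priority scheduler: when offered load exceeds port
# capacity, correct QoS drains the high-priority queue first and
# drops only low-priority frames -- the behavior our
# oversubscription test checked for. Idealized: no queue-depth
# limits or burst timing, which real hardware must contend with.
from collections import deque

def schedule(frames, capacity):
    # frames: list of "hi"/"lo" tags; capacity: frames the port can send
    hi_q = deque(f for f in frames if f == "hi")
    lo_q = deque(f for f in frames if f == "lo")
    sent = []
    while capacity and (hi_q or lo_q):
        q = hi_q if hi_q else lo_q   # strict priority: hi first
        sent.append(q.popleft())
        capacity -= 1
    return sent

# 9 frames offered into a port with room for 8: every high-priority
# frame survives; one low-priority frame is dropped.
offered = ["hi", "lo"] * 4 + ["lo"]          # 4 hi + 5 lo
out = schedule(offered, capacity=8)
print(out.count("hi"), out.count("lo"))      # 4 4
```

A box that drops high-priority frames under this kind of load, as the BigIron did at 500-packet bursts, isn't delivering on its QoS promise.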



Test Setups

It's worth mentioning that the Foundry box did better with QoS tests when we exceeded the 8-gigabit capacity on the input ports with the Ixia tester, which was easily able to generate a full 10 Gbps of traffic. Because the BigIron does QoS on incoming traffic, it performed better than the BlackDiamond. Extreme doesn't prioritize traffic as it comes into the interface but pointed out that it would have done better if flow control were turned on. In reality, it's unlikely either box would be connected to another device capable of generating a full 10 Gbps of traffic, though it's likely that each vendor will release next-generation equipment that doesn't have the 8-gigabit constraint. We also discovered that the Foundry BigIron is set up so that it always gives higher priority to traffic from 1-gigabit ports than to traffic from 10-gigabit ports. We'd prefer more flexibility.

Interoperability Tests

While it's all well and good to adhere to a standard, it's not worth much if you can't play nice with other vendors' devices in the real world. With this in mind, we plugged the Foundry and Extreme boxes directly into each other via one 10-gigabit port each. We used the remaining 10-gigabit ports to connect each box back to the Ixia tester. We then ran throughput tests for all the packet sizes and found no compromise in performance.

Not wanting to let them off so easy, we dug up a Net Optics tap to work into the mix. The Net Optics device is designed to tap into the data stream and reproduce it out another port for an analyzer without causing disruption. We put the Net Optics tap between the switches and successfully reran the throughput tests. This proved not only that the Foundry and Extreme switches could interoperate with another vendor's product, Net Optics' tap, but that the Net Optics box is able to tap into a 10-gigabit connection without interfering with performance. When an 802.3ae analyzer does become available, the tap will be ready for Foundry and Extreme.

While we were impressed with both products, we gave our Editor's Choice nod to Extreme because the BlackDiamond edged out Foundry's BigIron with slightly superior performance under QoS and better pricing. However, we would not hesitate to recommend either device--these vendors make up an elite group willing to subject their equipment to rigorous testing. Think of it this way: It's nice to win the Super Bowl, but it's also an accomplishment to make it to the game and more than hold your own.

Extreme Networks BlackDiamond 6808



Extreme sent us a high-end BlackDiamond switch/router with redundant CPU cards and eight slots for port cards. Our eight-slot BlackDiamond 6808 had a 128-gigabit backplane and was populated with two single-port 10-gigabit cards and an 8-port Gigabit Ethernet card. Extreme also has a 16-slot version with a 256-gigabit backplane.

We used the CLI (command-line interface) for all configuration. Even so, we found the setup straightforward. We appreciated being able to start typing a command and hit the Tab key to see the next set of options, which let us step through complex commands easily. We also liked being able to see running counters with the number of matches for all access-list conditions. The Extreme box has a full complement of routing protocol and multicast routing support.

The BlackDiamond didn't drop a single packet, even in our most rigorous performance tests. It also came with very inexpensive 10-gigabit interfaces, though the numbers we were quoted were based on a "sale" price. We always find cut-rate network equipment a little suspect--such prices often have very short half-lives. Furthermore, actual prices of network hardware tend to be very negotiable, which is why we gave cost only a 5 percent weighting in our report card.

BlackDiamond 6808, $24,995. Extreme Networks, (800) 867-0429, (408) 579-2800. www.extremenetworks.com

Web Links

Running the Numbers (InformationWeek, Sept. 30, 2002)

Big Fat Bandwidth (Network Computing, June 24, 2002)


Foundry Networks BigIron



The Foundry switch also came with redundant CPU cards and slots for seven additional cards. Our test box included two single-port, 10-gigabit 802.3ae cards as well as a Gigabit Ethernet card. The Foundry platform uses an IOS-like CLI, which was very familiar and made configuration of the BigIron easy. We found, however, that the device could not give us statistics on access lists without our logging them to a syslog server. We didn't turn on this feature because it would have compromised performance in our testing.

The Foundry box had the best variety of 10 Gigabit Ethernet interfaces. Along with the Long Range (1310nm) interface, which we required for our test, the BigIron supports the Short Range and Extended Range LAN interfaces. Extreme's device supports only the Long Range interface, but the company says it plans to add support for the others. Interestingly, neither vendor supports the 802.3ae WAN interfaces, which are designed to provide a more economical transition to SONET networks.

Foundry BigIron, $59,995. Foundry Networks, (408) 586-1700. www.foundrynetworks.com

Peter Morrissey is a full-time faculty member of Syracuse University's School of Information Studies, and a contributing editor and columnist for Network Computing. Write to him at [email protected].

Although backplane connection limitations hold throughput to 8 Gbps, if you've inundated a gigabit backbone, that's plenty. The sticking point is price--tens of thousands of dollars per connection. Multiple gigabit trunks may do the trick, and if you can wait, prices will drop. But if your critical links are nearing capacity, you'll have to ante up.

Extreme won our Editor's Choice award by a nose, but we were very impressed by both devices and would recommend either. We also give both vendors kudos for stepping up and subjecting their switches to our tests. In that sense, they're both winners.





