Wireless Network Head-to-Head: Cisco Vs. Meru

We compared wireless gear from Cisco Systems and Meru Networks in our Real-World Labs. The performance results are valuable, but the real story is whether Meru deviates from the 802.11 standard.

November 17, 2006


As WLANs have matured, we've seen nearly a hundredfold increase in raw performance. The flip side of all that innovation is the law of diminishing returns: As a technology matures, vendors have fewer opportunities for revolutionary advances. So when we hear about a company seemingly lapping its rivals, our curiosity is piqued.

In this case, the object of our interest is Meru Networks. Although there are variations among gear from Aruba Networks, Cisco Systems and Symbol Technologies, the market leaders, the similarities are more notable than the differences. As most products consolidate around a common WLAN architecture--controller-based thin-AP systems, dual-band services, dense AP deployments and overlapping cell designs--Meru is bucking the trend with unique offerings that reject conventional architectural and deployment models. Instead of a cumbersome multichannel cell design, Meru's single-channel architecture makes deployment easier while promising greater scalability, enhanced roaming and coordinated over-the-air quality of service, a feature it calls Air Traffic Control.

 

 

Download Results Spreadsheets

For further analysis and exploration, we're providing the raw performance results from our head-to-head benchmark of Meru and Cisco. Each spreadsheet contains all the test results, including '802.11b/g Coexistence', 'Unencrypted 802.11g', 'Encrypted 802.11g' and 'Voice and Data Coexistence'. One notable point is Meru's airtime fairness, under which the ratio of upstream to downstream traffic is nearly 1-to-1. For a description of our test setup, see "How We Tested."

• Download Cisco Capacity Results
• Download Meru Capacity Results

 

We've been intrigued with Meru's technology for some time, and after a series of unsuccessful attempts to include it in product reviews, we finally got its gear into our Syracuse University Real-World Labs®. With Meru's encouragement, we tested its product head-to-head against Cisco's, sharing test results with both companies along the way to ensure our analysis was as objective as possible. During our evaluation, we talked to current and prospective Meru customers, hashed out technology issues with senior staff from both vendors, and put the products through a battery of tests, conducted in both a real-world building environment as well as in an RF isolation chamber (see "Testing: Cisco vs. Meru").

Our testing gave us a much better understanding of Meru's setup, and we were impressed by its outstanding Wi-Fi performance. When we threw a combination of wireless VoIP and conventional data traffic at Meru and Cisco gear--a scenario we believe will become increasingly common--Meru offered superior voice quality while maintaining steady data traffic flows. When we evaluated performance in a mixed-mode 802.11b/802.11g environment--common today--Meru didn't let 11b traffic cripple our 11g clients, as is typical of most competing offerings. And while our test bed was too small to systematically evaluate roaming performance, we're convinced that Meru's single-channel architecture offers advantages there as well.

But that's only half the story.

Although our performance results support Meru's claims, we spent a lot of time scratching our heads, wondering how the company does it. We've worked with 802.11 for a long time and have a solid understanding of the limitations of the spec's shared-media CSMA/CA architecture. The spec's lack of QoS capabilities is an acknowledged shortcoming, and though the 802.11e enhancements address many of these drawbacks, this standard is not broadly implemented. Yet Meru delivers many of these capabilities, using standard 802.11 clients.

Unfortunately, the company hasn't been entirely forthcoming with details related to how it achieves its results--presumably because it's Meru's equivalent of the formula for Coca-Cola. After performing side-by-side comparisons and analyzing packet traces, we came away suspecting that Meru's secret may leave a bitter aftertaste, especially if a neighboring business is running a Meru system on the same channel as your non-Meru system. Cisco was unambiguous in claiming that Meru is violating 802.11 standards by artificially manipulating the NAV (network allocation vector) value in certain duration fields (see "Duration, Duration, Duration" below). Meru denies these allegations, claiming its products are "100 percent standards-compliant." Based on our understanding of 802.11's virtual carrier sense architecture and the role that duration field values play in managing contention, we find Cisco's charges credible, but we'll reserve final judgment until other industry experts weigh in on this controversial issue.

It's worth noting that this is not simply an issue of RF interference between 802.11 systems. It is an issue of the degree to which one vendor can extend 802.11 standards to prioritize its own system's wireless traffic in a manner that diminishes the performance of adjacent systems. Despite Cisco's claims, it isn't clear whether Meru is violating 802.11 standards, but the company's continuing insistence that it "can only go so far" in explaining how it prioritizes traffic leaves it vulnerable to speculation, especially when suspicious protocol behavior is observed.

Leaving aside the issue of standards compliance, we're still reluctant to offer an enthusiastic recommendation for Meru at this time. Yes, the system outperformed Cisco on our test bed, but we encountered numerous bugs and support problems along the way. Despite repeated requests, we also had trouble getting Meru to connect us with customers running large-scale installations. We did speak with a very satisfied customer at a small liberal arts college in New York, but Meru was unable to arrange an interview with any larger customers in time to meet our article deadline. In addition, while Meru may in fact have a creative and effective way to deliver QoS over WLANs, its architecture runs counter to industry norms.

Three Strikes?

We've tested hundreds of wireless products over the years and, from the start, our experience with Meru was problematic. Meru insisted that we evaluate its architecture and performance in an open-air environment, a challenge given the prevalence of wireless networks at our Syracuse University labs. Interference may be real-world, but you can't do competitive performance testing without controlling for that variable. To accommodate Meru, we conducted initial testing between midnight and 6 a.m., after working with the university IT staff to disable all production WLAN services that might have corrupted our results. Our first round of late-night tests ended with results that failed to meet Meru's expectations. Our second round, using an updated configuration script, failed as well. The third time proved to be a charm: With a Meru engineer who had flown in to oversee the testing looking on, the system performed as expected.

Meru's explanation for our first two failed attempts was a defective AP, and we have no reason to doubt that. But the troubleshooting process was frustrating nonetheless, and it shook our confidence in the company just a bit. A prerelease code bug that caused poor performance with WPA2-PSK (Pre-Shared Key) encryption enabled was also identified and corrected.

Of equal concern was Meru's responsiveness. Throughout the process, seemingly simple requests for new hardware, updated software, technical support and customer references resulted in lengthy delays and, in some cases, no response at all. And because we weren't able to talk to customers that were pushing large amounts of traffic across a Meru infrastructure, we weren't able to assess whether our lab findings could be generalized to a true production enterprise environment.

Despite a lack of help from the company, we did manage to talk--off the record--to a couple of Meru customers. Both expressed a high level of satisfaction with the company's products, but neither had installations that let us fully assess Meru's performance and scalability claims.

As is normal when we conduct head-to-head tests, we involved both Meru and Cisco in the testing process. As we collected performance results, we discussed them with both companies, and we pressed Cisco to provide an explanation for why Meru performed so much better on several tests. Cisco's reply surprised us: Rather than spinning the results, it suggested that Meru was violating 802.11 standards. More specifically, it shared internal analysis that indicated Meru was manipulating the 802.11 duration field, a key element of 802.11's virtual carrier sense architecture.

Meru dismissed Cisco's allegations, initially suggesting that they were the latest of a long string of false accusations, dating back to the days when Meru and Airespace were competing for attention as start-up enterprise WLAN vendors.

After we painstakingly documented the problem through our own analysis of packet captures, however, Meru's response changed. In an October meeting in our labs, Meru's CEO, Ihab Abu-Hakima, and chief product architect, Joe Epstein, asserted that the results were the manifestation of a bug in Meru's rate-adaptation algorithm. They further insisted that manipulation of the duration field in data frames played no role whatsoever in Meru's competitive performance benefits. Abu-Hakima was apologetic and assured us that the company was making significant new investments in quality assurance. He also chastised the company's competitors, claiming they had a long record of misrepresentation.

Meanwhile, Epstein discussed the rate-adaptation bug and spent considerable time answering our questions about other elements of the Meru architecture, including the single-channel design. He also provided us with a lengthy, and quite helpful, interpretation of our test results. We came away from the meeting prepared to give Meru the benefit of the doubt ... until we saw the duration-field issue crop up once more, in a slightly different context.

Back To The Test Bed

After our meeting, we began retesting, this time in an RF isolation chamber that removed any possibility of interference from other networks, using the latest Meru code. We shared packet traces with both Cisco and Meru, a standard practice during comparative reviews.

The next day, Cisco came back to us with more evidence of incorrect 802.11 duration fields, this time related to 802.11's CTS-to-Self (Clear-to-Send-to-Self) mechanism. Note that though Cisco did look at our traces and point out duration-field anomalies, all testing happened independently in our lab.

When confronted with this new information, Meru clarified its previous position. While we had interpreted Meru's initial statement as a denial that it was manipulating duration field values, the company subsequently explained that its response to our initial question was factually correct: It was not manipulating the duration value of DATA frames. It was, however, changing duration values in CONTROL frames. We were told that the company was simply taking advantage of enhancements to the 802.11 standard included in the 802.11e QoS standard. Specifically, Meru referenced 802.11e's ability to transmit multiple frames using a TXOP (Transmission Opportunity), but our understanding is that this technique is reserved for 802.11e devices. Cisco was more direct, asserting that Meru is not implementing all the protocol elements required for 802.11e standards compliance, notably the "TXOP Limit" field required in 802.11e beacon packets. A subsequent e-mail exchange with Epstein eventually resulted in no response to a question we posed about whether Meru was employing all mandatory elements of 802.11e. We interpreted that silence as a decision not to respond. More than a week later, just prior to our filing deadline, we finally heard back from Epstein, who defended the company's products as fully compliant with 802.11 standards and apologized for the slow response, attributing it to a broken laptop that prevented him from connecting to the company's VPN.
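For readers who want to check beacons themselves, the TXOP limits that 802.11e-capable APs advertise are carried in the per-access-category records of the EDCA Parameter Set (WMM carries the same record layout in a vendor-specific element). The Python sketch below decodes those records; the field layout follows the 802.11e/WMM specification, but the parse_edca_records helper and the sample bytes (which use common WMM default values) are our own illustration, not either vendor's code.

```python
import struct

# Each EDCA/WMM access-category record is 4 bytes:
#   byte 0: ACI/AIFSN, byte 1: ECWmin/ECWmax (nibbles),
#   bytes 2-3: TXOP limit, little-endian, in units of 32 microseconds.
AC_NAMES = {0: "Best Effort", 1: "Background", 2: "Video", 3: "Voice"}

def parse_edca_records(ac_bytes):
    """Decode the four 4-byte AC records from an EDCA Parameter Set."""
    records = []
    for i in range(0, 16, 4):
        aci_aifsn, ecw, txop = struct.unpack_from("<BBH", ac_bytes, i)
        records.append({
            "ac": AC_NAMES[(aci_aifsn >> 5) & 0x3],
            "aifsn": aci_aifsn & 0x0F,
            "cw_min": (1 << (ecw & 0x0F)) - 1,
            "cw_max": (1 << (ecw >> 4)) - 1,
            "txop_limit_us": txop * 32,  # 0 means one frame exchange per TXOP
        })
    return records

# Sample records using the common WMM defaults: voice advertises a
# 1,504-us TXOP limit (47 * 32), video 3,008 us, best effort/background 0.
sample = bytes([0x03, 0xA4, 0x00, 0x00,   # Best Effort
                0x27, 0xA4, 0x00, 0x00,   # Background
                0x42, 0x43, 0x5E, 0x00,   # Video
                0x62, 0x32, 0x2F, 0x00])  # Voice
for rec in parse_edca_records(sample):
    print(rec)
```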

Have we discovered Meru's secret formula? We're not sure. We know that Meru's architecture allows it to exert significant control over the manner in which 802.11 clients gain access to the network, and manipulating duration field values appears to be a reasonable way to accomplish such a feat. It's conceivable that there are alternative explanations for how Meru is able to prioritize wireless traffic, but the company was unwilling to provide a full explanation, citing that information as proprietary. However, by not disclosing the details, it is inviting industry speculation that it is not adhering to standards.

Not The First Time, Not The Last

An obvious question that might be posed by existing or prospective customers: Who cares if Meru is tweaking the standard to improve performance, as long as it works? Given that many organizations operate WLANs within the geographic confines of their own facilities, why should it matter whether Meru's system could interfere with adjacent networks that don't exist?

It's a fair question.

This isn't the first time a vendor has strayed from standards. Many companies--including Cisco--have "extended" standards with proprietary enhancements designed to provide performance benefits. This is one of the ways vendors differentiate themselves.

There have also been instances where vendors have engaged in creative interpretation of 802.11 standards. For example, wireless VoIP market leader SpectraLink has been prioritizing network access of its phones for several years by implementing zero-backoff algorithms that raise questions about 802.11 compliance. Another notable example involves home Wi-Fi gateways that employ proprietary channel-bonding enhancements that negatively impact adjacent networks, regardless of which channel they're operating on. That incident led the Wi-Fi Alliance to establish a "good-neighbor" policy that, if violated, could result in denial or revocation of Wi-Fi certification.

It's worth noting that Meru's system has been certified by the Wi-Fi Alliance, and it doesn't rely on any proprietary enhancements to Wi-Fi clients to achieve its performance benefits: It outperforms competitors in certain scenarios using standard, off-the-shelf clients. Likewise, it's important to acknowledge that prioritized access is not Meru's only differentiator. Its single-channel architecture also bucks industry norms, and because it does not require complex multichannel site surveys, it greatly simplifies implementation. Although there is a capacity trade-off associated with going single-channel, for environments focused on coverage rather than capacity, it may be worth it.

In addition, high capacity environments looking to deploy Meru's architecture can leverage its multiradio Radio Switch. Or so the company says--we weren't encouraged to test this device, nor were we given customer references to talk to, so we don't know first- or even second-hand how well this works.

Bottom line, market realities suggest that Meru's model is unlikely to win out. With well over 95 percent of deployed products relying on a multichannel architecture, it's highly unlikely that vendors will change direction. Yes, mainstream enterprise WLAN products offered by Aruba, Cisco, Symbol and others come with a number of implementation and operational challenges, but there's good reason to believe these problems will be resolved over time through continued enhancements to standards if WLANs are to assume the primary role in the enterprise networks of the future.

Duration, Duration, Duration

Networks based on 802.11 standards employ a header field known as the duration field. For a data frame, this field specifies how many microseconds are required to transmit the subsequent ACK frame, including the required SIFS (short interframe space) interval. The duration field causes all listening devices to set their NAV (Network Allocation Vector) values, restricting them from gaining access to the medium under the rules of 802.11's virtual carrier sense mechanism. Stations on an 802.11 network must wait for that reservation to expire before contending for access to the medium. By artificially manipulating the value of the duration field, a vendor can impose more control over media access. Version 3.1.3-7 of Meru's software incorrectly calculated the duration value of nearly every packet. We confirmed the existence of this "bug" by analyzing traces on a production Meru network: Almost every packet we analyzed in a simple HTTP download had a duration value 16 microseconds too long. When confronted with these results, Meru asserted that it was a bug in its duration value calculation algorithm related to 802.11 rate adaptation, a bug that had since been corrected.
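To make the arithmetic concrete, here is a minimal Python sketch of the spec-defined duration value for a data frame and a check for inflated values. It assumes 802.11b long-preamble ACKs; the flag_inflated helper and the sample numbers are our own illustration, not output from either vendor's tools.

```python
# Minimal sketch: flag data frames whose duration field exceeds the
# spec-defined value (one SIFS plus the following ACK's airtime).
SIFS_US = 10  # SIFS for 802.11b/g, in microseconds

def ack_airtime_us(ack_rate_mbps, preamble_us=192):
    """Airtime for a 14-byte ACK: long PLCP preamble/header plus payload."""
    return preamble_us + (14 * 8) / ack_rate_mbps

def expected_duration_us(ack_rate_mbps):
    """A data frame's duration field should cover SIFS plus the ACK."""
    return SIFS_US + ack_airtime_us(ack_rate_mbps)

def flag_inflated(frames, tolerance_us=1):
    """frames: iterable of (duration_field_us, ack_rate_mbps) pairs."""
    for dur, rate in frames:
        excess = dur - expected_duration_us(rate)
        if excess > tolerance_us:
            yield dur, round(excess)

# With a 2-Mbps ACK, the expected duration is 10 + 192 + 56 = 258 us,
# so a captured value of 274 us would be flagged as 16 us too long.
for dur, excess in flag_inflated([(274, 2.0)]):
    print(f"duration {dur} us is {excess} us over spec")
```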

 


Rude Neighbors

The 802.11 standard also lets contention be managed using the CTS-to-Self (Clear-to-Send-to-Self) mechanism. The function of this frame is to reserve the air for an impending frame by including a duration field long enough to cover the data frame, two SIFS intervals and the corresponding ACK frame. Upon analysis of packet captures taken during an 802.11b/g co-existence test on a Meru infrastructure running version 3.2.5.SR1-4, it became apparent that its CTS-to-Self duration values were inflated. The 802.11 standard defines the CTS-to-Self duration value as being long enough to cover one following data frame (often in the range of 300 microseconds), but Meru inflates this value outside of the 802.11 standard to as high as 2,500 microseconds, thereby protecting multiple sequential data frames and monopolizing airtime.
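The reservation math the spec calls for is straightforward. Below is a rough Python sketch, assuming an ERP-OFDM data frame and ACK; the airtime helper simplifies away symbol padding, and the specific rates and frame size are illustrative assumptions on our part.

```python
SIFS_US = 10  # SIFS for 802.11b/g, in microseconds

def ofdm_airtime_us(nbytes, rate_mbps):
    """Rough ERP-OFDM airtime: ~20-us PLCP preamble/header plus payload
    (symbol padding ignored for simplicity)."""
    return 20 + (nbytes * 8) / rate_mbps

def cts_to_self_duration_us(data_bytes, data_rate_mbps, ack_rate_mbps=24):
    """Per the spec, a CTS-to-Self duration covers ONE data frame,
    two SIFS intervals and the corresponding ACK."""
    return (2 * SIFS_US
            + ofdm_airtime_us(data_bytes, data_rate_mbps)
            + ofdm_airtime_us(14, ack_rate_mbps))

# A 1,500-byte frame at 54 Mbps works out to roughly 290 microseconds,
# in line with the ~300-us values the standard anticipates and far
# below the 2,500-us values we observed in the Meru captures.
print(round(cts_to_self_duration_us(1500, 54)))
```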

The effects of this duration-field inflation are detrimental to adjacent networks. For those 2,500 microseconds reserved by the Meru cell, no other client or AP within 802.11 listening range on that radio channel can contend for access to the medium.

To quantify the negative impact, we placed a Meru AP and a Cisco AP side by side on the same radio channel in an RF isolation chamber. For a baseline, we first ran the 802.11b/g co-existence test on two collocated Cisco APs and measured 6.29 Mbps and 5.54 Mbps, respectively, exhibiting normal 802.11 airtime fairness, as expected. In contrast, the same test run with collocated Cisco and Meru APs yielded 2.74 Mbps for Cisco and 8.02 Mbps for Meru (see "Rude Neighbors," above right). Meru asserted that these results demonstrate its product's superior design.

Testing: Cisco Vs. Meru

We tested Cisco's 4402 Controller and LWAPP-based Cisco 1240 AP. Meru's infrastructure comprised a model MC3000 controller and an AP208 AP.

802.11G MULTISTATION PERFORMANCE: UNENCRYPTED AND ENCRYPTED

 


802.11g Multistation Performance

The most basic tests we ran against each infrastructure evaluated the performance of a single 802.11b/g AP cell with eight 802.11g clients associated. This scenario simulated the real-world throughput users would experience in a pure 802.11g environment (though 802.11b was left enabled) with no interference from neighboring APs. Using Chariot, we defined two TCP traffic flows for each client, one for upstream traffic and one for downstream traffic, resulting in a total of 16 measurable traffic flows. The first variant of this test used open, unencrypted security settings, ensuring that the sole throughput-limiting factor was the bandwidth of the wireless connection. Overall aggregate performance for Cisco was 21.9 Mbps, with 21.4 Mbps for Meru (see "802.11g Multistation Performance," left).

The second variant benchmarked each infrastructure's performance with WPA2-PSK enabled under the same conditions as the unencrypted test case. In our initial encrypted test iteration, Cisco's infrastructure had an aggregate throughput of 21.7 Mbps; for Meru, we observed aggregate throughput of 15.3 Mbps. Meru explained that its initial poor performance resulted from a bug in its encryption engine related to debug code that had not been disabled, and it supplied us with a new release to correct the problem. With the new code, we observed aggregate performance of 21.1 Mbps on Meru's system.

802.11B/G CO-EXISTENCE

When multiple 802.11b and 802.11g clients communicate concurrently with the same AP, a significant drop in performance occurs because 802.11b clients require more airtime to transfer a given amount of data. Although this is becoming less of a problem as 802.11b clients age out of production in enterprises, many app-specific devices, such as most wireless VoIP handsets and PDAs, still rely on 802.11b radios. Similar problems are likely to arise when 802.11g and 802.11n devices co-exist on networks.
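The airtime disparity is easy to quantify. Here's a small Python sketch comparing the time a 1,500-byte frame occupies the air at typical 802.11b and 802.11g rates; the airtime_us helper is our own simplification and ignores MAC overhead and contention.

```python
def airtime_us(nbytes, rate_mbps, preamble_us):
    """Rough time on air for one frame: PLCP preamble/header plus payload."""
    return preamble_us + (nbytes * 8) / rate_mbps

# 1,500-byte frame: the 802.11b long preamble is 192 us; ERP-OFDM uses ~20 us.
b_time = airtime_us(1500, 11, 192)  # ~1,283 us at 11 Mbps
g_time = airtime_us(1500, 54, 20)   # ~242 us at 54 Mbps
print(f"802.11b: {b_time:.0f} us, 802.11g: {g_time:.0f} us "
      f"({b_time / g_time:.1f}x more airtime for 11b)")
```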


802.11b/g Co-existence

To evaluate the performance impact of 802.11b devices on an 802.11g infrastructure, we associated eight laptops to a single AP, with six in the default 802.11b/g mode while two were manually configured to use only 802.11b. We generated two Chariot TCP traffic flows for each laptop, one for upstream traffic and one for downstream traffic, resulting in a total of 16 measurable traffic flows.

Aggregate throughput for the 802.11b clients was 2.314 Mbps when using Cisco infrastructure and 0.33 Mbps when using Meru. Aggregate throughput for 802.11g clients was 9.039 Mbps with Cisco and 12.702 Mbps with Meru. Combined aggregate throughput of both 802.11b and 802.11g clients was 11.353 Mbps for Cisco and 13.032 Mbps for Meru (see "802.11b/g Co-Existence," right).

Meru provided better aggregate throughput in our tests, and the manner in which the two vendors achieve their results is worth explaining. When 802.11b/g clients co-exist in a Cisco infrastructure, contention-based packet fairness means all clients compete equally for transmission opportunities, regardless of their speed. Because 11b clients take more time to transmit each packet, they consume a disproportionate share of airtime. Meru takes a different approach, employing a time-fairness algorithm that lets both 11b and 11g clients transmit for equal time periods. This enhances 802.11g throughput while diminishing 802.11b throughput. Note, too, that only 802.11b data clients are restricted; 802.11b VoFi handsets are automatically prioritized. For the most part, Meru's time-fairness scheme reflects what most network admins would like to see on their WLANs: better overall throughput and higher priority for 802.11g traffic.
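To make the two policies concrete, here's a back-of-the-envelope model in Python: under packet fairness every client gets the same number of frame transmissions, while under time fairness every client gets the same share of airtime. The client rates and frame size are illustrative assumptions, and the model ignores protocol overhead entirely.

```python
def per_client_mbps(rates_mbps, policy, frame_bytes=1500, window_s=1.0):
    """Toy model: divide one second of airtime among clients and return
    each client's throughput under the given fairness policy."""
    frame_bits = frame_bytes * 8
    airtimes = [frame_bits / (r * 1e6) for r in rates_mbps]  # sec per frame
    if policy == "packet":
        # Equal frame counts: slow clients eat the shared airtime budget.
        frames_each = window_s / sum(airtimes)
        return [frames_each * frame_bits / 1e6 for _ in rates_mbps]
    # Equal airtime: each client's throughput scales with its link rate.
    share = window_s / len(rates_mbps)
    return [share / t * frame_bits / 1e6 for t in airtimes]

# Two 802.11b clients at 11 Mbps plus six 802.11g clients at 54 Mbps.
rates = [11, 11] + [54] * 6
for policy in ("packet", "time"):
    clients = per_client_mbps(rates, policy)
    print(f"{policy:>6}: 11b {clients[0]:.1f} Mbps each, "
          f"11g {clients[-1]:.1f} Mbps each")
```

Even in this idealized model, switching from packet fairness to time fairness roughly doubles per-client 11g throughput while cutting 11b throughput by more than half, the same qualitative shift we measured.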

VOICE AND DATA CO-EXISTENCE

Our voice and data co-existence test case assesses an infrastructure's ability to provide both data services and wireless VoIP communications within the same wireless cell. In this scenario, we included three client types: data-only laptops, VoIP phones and laptops that simulated VoIP traffic. The data clients consisted of six laptops operating in 802.11b/g mode, each with one upstream traffic flow and one downstream traffic flow. For our VoIP phones, we used 10 802.11b Hitachi-Cable Wireless IPC-5000 SIP phones, connected to one another through our OnDO SIP Server, for a total of five call pairs. To provide a measurable call-quality metric, we used two laptops operating in 802.11b-only mode, each configured with one upstream and one downstream Ixia Chariot VoIP flow to simulate a full-duplex conversation. This test case also required proper QoS configuration on each wireless infrastructure. For Cisco, we created a unique ESSID using the "Platinum/Voice" QoS setting, ensuring that the VoIP clients associating to that SSID were given higher priority than data clients. For Meru, we configured a QoS rule for each of the VoIP laptops, but the VoIP phones were classified automatically via pattern-matching rules built into the Meru software.
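Meru doesn't document its pattern-matching rules, but the general flow-classification technique is easy to sketch: look for SIP signaling or RTP-shaped UDP packets and map matches to a voice queue. The heuristics in the Python below are our own illustrative assumptions, not Meru's actual rules.

```python
def classify_flow(proto, dst_port, payload):
    """Toy classifier: send VoIP-looking traffic to the voice queue."""
    if proto == "udp" and dst_port in (5060, 5061):
        return "voice"                   # SIP signaling
    if proto == "udp" and len(payload) >= 12:
        version = payload[0] >> 6        # RTP version field should be 2
        ptype = payload[1] & 0x7F        # 0 = G.711 PCMU, 8 = PCMA, 18 = G.729
        if version == 2 and ptype in (0, 8, 18):
            return "voice"               # RTP carrying common voice codecs
    return "data"

# A minimal RTP header for G.711 PCMU (payload type 0).
rtp_header = bytes([0x80, 0x00]) + bytes(10)
print(classify_flow("udp", 16384, rtp_header))  # -> voice
```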

 


Toll Quality Voice

Because this test assesses both data and voice performance, the results are organized into two categories with distinctly different measurement scales. The first metric measures the aggregate throughput of the six data-only laptops, with Cisco achieving 3.1 Mbps and Meru achieving 1.3 Mbps. We measured voice quality using MOS (Mean Opinion Score), which ranges from 1 to 5, with 5 being the best. MOS scores describe voice quality by accounting for latency, jitter and packet loss.
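MOS estimates of this kind are typically derived from the ITU-T G.107 E-model, which computes a transmission rating, R, from delay and loss impairments and maps it to a MOS value. In the Python sketch below, the R-to-MOS mapping is the standard formula, but the simplified impairment terms are our own approximations for illustration.

```python
def r_to_mos(r):
    """ITU-T G.107 mapping from the R transmission rating to MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

def estimate_mos(one_way_delay_ms, loss_pct):
    """Simplified E-model for G.711: start from a default R of 93.2,
    then subtract approximate delay and packet-loss impairments."""
    r = 93.2
    r -= 0.024 * one_way_delay_ms          # rough delay impairment
    r -= 95 * loss_pct / (loss_pct + 4.3)  # approximate G.711 loss impairment
    return round(r_to_mos(r), 2)

print(estimate_mos(50, 0.0))   # clean network: ~4.4, toll quality
print(estimate_mos(150, 2.0))  # congested cell: drops to roughly 3.1
```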

We used Chariot to evaluate and assign MOS scores to our simulated VoIP traffic flows, providing an objective voice-quality measurement; the results were confirmed by our subjective impression of voice quality using the Hitachi phones. When interpreting MOS scores, it's important to evaluate both upstream and downstream results, because the combination best reflects a true full-duplex voice conversation. The results for the Cisco infrastructure were a downstream MOS of 4.315 and an upstream MOS of 3.72. In comparison, the downstream and upstream MOS scores for Meru were 4.28 and 4.145, respectively (see "Toll-Quality Voice," above right).

Meru's superior voice performance comes at the expense of its background data client throughput. Cisco's data throughput was more than double Meru's, but its upstream MOS score fell below the 4.0 threshold often associated with toll-quality voice. The cause of Cisco's poor upstream MOS scores is a lack of upstream QoS mechanisms: Although Cisco's infrastructure supports standards-based QoS through 802.11e and is WMM-certified (Meru's is not), relatively few VoIP clients, including the Hitachi phones we used for testing, support these features. In the future, both voice and data clients will likely support WMM and 802.11e, paving the way for converged wireless networks where real-time applications such as voice and video can be appropriately prioritized both upstream and downstream.

How We Tested

All testing--including field testing in Hinds Hall and in a campus RF isolation chamber--took place at our Syracuse Real-World Labs®. Field testing was performed between midnight and 6 a.m., after all test and production APs in the building were disabled. We verified clean channel conditions using AirMagnet Laptop Analyzer 6.0 and Cognio's Spectrum Expert for WiFi 2.0 at each test location.

Cisco's performance test used a Model 4402 controller running software release 3.2.114.0 and an LWAPP-based Cisco 1240 AP. Tests with Meru's infrastructure used a model MC3000 controller running software release 3.2-116 and an AP208 AP. Each vendor's controller and AP were connected to a 24-port Cisco 3750G switch supplying Power over Ethernet to each AP.

Traffic generation and analysis were performed with Ixia's Chariot 5.0 using the TCP-based Throughput.scr script, which repeatedly transfers a 100-KB file for the test's duration. We ran each test for five minutes in Chariot's "batch" mode with endpoint polling disabled.
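We can't reproduce Ixia's script here, but the measurement pattern of Throughput.scr is simple to sketch: push a fixed-size buffer over a TCP socket for the duration of the test, then compute the rate. The Python below is our own stand-in, assuming a sink process listening on the endpoint; it is not Ixia's code.

```python
import socket
import time

def measure_throughput_mbps(host, port, duration_s=300, chunk_kb=100):
    """Repeatedly send a 100-KB buffer over TCP for the test duration
    and report average throughput in Mbps."""
    payload = b"\x00" * (chunk_kb * 1024)
    sent_bytes = 0
    deadline = time.monotonic() + duration_s
    with socket.create_connection((host, port)) as sock:
        while time.monotonic() < deadline:
            sock.sendall(payload)
            sent_bytes += len(payload)
    return sent_bytes * 8 / duration_s / 1e6

# Example (endpoint address is hypothetical):
# print(measure_throughput_mbps("192.0.2.10", 9000))
```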

Our test bed of laptops included two Lenovo T43 ThinkPads with internal Intel Centrino 2915 a/b/g adapters, one HP nc6220 with an internal Intel Centrino 2915 a/b/g adapter, one Dell D610 with an internal Intel Centrino 2915 a/b/g adapter, one Dell C610 with a Cisco CB21AG PCMCIA adapter, two Toshiba Satellite 1200s with Cisco CB21AG PCMCIA adapters, and a Dell D500 with an internal Intel Centrino 2915 adapter. Each laptop ran Microsoft Windows XP SP2 with the firewall disabled. The Windows power scheme used in every test was the Windows-provided "Always On" profile, and each laptop ran on AC power. Each WLAN adapter was set to Constantly Awake Mode (CAM) to prevent the wireless radios from entering power-save mode. The wireless VoIP phones used in our tests were Hitachi-Cable Wireless IPC-5000 SIP phones connected wirelessly to a Windows 2003 Server running Brekeke's OnDO SIP Server 1.5.2.0, which provided call routing between the phones. Each call used the uncompressed G.711 PCM codec, and silence suppression was disabled to ensure a steady flow of packets.

All Network Computing product reviews are conducted by current or former IT professionals in our own Real-World Labs®, according to our own test criteria. Vendor involvement is limited to assistance in configuration and troubleshooting. Network Computing schedules reviews based solely on our editorial judgment of reader needs, and we conduct tests and publish results without vendor influence.

Dave Molta is a Network Computing senior technology editor. He is also assistant dean for technology at the School of Information Studies and director of the Center for Emerging Network Technologies at Syracuse University. Write to him at [email protected].

Jameson Blandford is the lab director at the Center for Emerging Network Technologies at Syracuse University. Write to him at [email protected].
