I recently participated in a couple of Webinars that at least partially addressed the continuing issues surrounding the system architecture of enterprise-class wireless LAN systems.
Those of you who have been involved with WLANs for a while may remember about a decade ago when Symbol Technologies (later acquired by Motorola) introduced the WLAN switch, which subsequently evolved into the WLAN controller at the heart of so many product lines today. The idea was simple and brilliant--move key, common functional elements out of the access point (AP) and into the centralized switch or controller. APs get thin, cheap, and simple, and all of the difficult stuff gets done in a central element, akin to an Ethernet switch or even a router, that can be duplicated for improved reliability. It all seems so contemporary, logical, and correct.
Indeed, most WLAN suppliers today offer controllers of one form or another. But there is, in fact, a difference between the controller as box and the controller as function; the latter being what we call the control plane when describing the location of specific functions within a particular architecture.
There are two more planes: the management plane, which really must be centralized and which handles planning, configuration, policy definition, monitoring, reporting, and much more; and the data plane, which describes how data moves within an implementation.
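The three-plane model above can be sketched as a simple mapping of WLAN functions to planes. This is an illustrative sketch only--the function names for the control plane (security, traffic prioritization, QoS, drawn from the discussion below) and the mapping itself are assumptions for the sake of the example, not any vendor's actual partitioning.

```python
from enum import Enum

class Plane(Enum):
    MANAGEMENT = "management"  # planning, configuration, policy, monitoring
    CONTROL = "control"        # the "operating system" of the WLAN
    DATA = "data"              # how user traffic actually moves

# Hypothetical assignment of common WLAN functions to planes;
# function names are illustrative, not taken from any product.
FUNCTION_PLANE = {
    "planning": Plane.MANAGEMENT,
    "configuration": Plane.MANAGEMENT,
    "policy_definition": Plane.MANAGEMENT,
    "monitoring": Plane.MANAGEMENT,
    "reporting": Plane.MANAGEMENT,
    "security": Plane.CONTROL,
    "traffic_prioritization": Plane.CONTROL,
    "qos": Plane.CONTROL,
    "user_traffic_forwarding": Plane.DATA,
}

def plane_of(function: str) -> Plane:
    """Look up which plane a given WLAN function belongs to."""
    return FUNCTION_PLANE[function]
```

The architecture debate is then exactly about where the CONTROL and DATA entries live: centralized in a controller, or distributed across the APs.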
The big question with respect to the latter is whether user traffic must flow through the controller or not--in other words, whether the AP is thin or fat. Traditional, or fat, APs require no separate controller. The control plane--which I like to describe as the operating system of a WLAN--is distributed among all APs in a given installation. Data in this case can be directly forwarded as required, which seems more efficient. However, as Keerti Melkote, the CTO of WLAN leader Aruba Networks, once stated in a conference session I attended, after you've spent milliseconds sending data over the air, what's a few more microseconds to send it to a powerful controller where security, traffic prioritization, and QoS (and often more, depending upon product) can be more efficiently managed?
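Melkote's argument is easy to check with back-of-the-envelope arithmetic. Both figures below are illustrative assumptions, not measurements from any product: if over-the-air delivery takes on the order of milliseconds, a controller hop measured in microseconds adds only a small relative cost.

```python
# Illustrative assumptions only: ~2 ms to deliver a frame over the air
# (including contention and retries), ~50 us extra to traverse a controller.
air_time_s = 2e-3
controller_hop_s = 50e-6

# Relative latency cost of tunneling the frame through the controller.
overhead_pct = 100 * controller_hop_s / (air_time_s + controller_hop_s)
print(f"Added latency from the controller hop: {overhead_pct:.1f}%")  # ~2.4%
```

Under these assumptions the controller hop is a rounding error next to the air interface--which is precisely why the debate turns on controller load and cost rather than on per-packet latency alone.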
Well, the competition counters, not having to forward data means that the controller (logical or physical) can be much simpler in design and implementation, loading on the network links to the controller is reduced, the controller box itself is minimized or perhaps even eliminated, and the need for controller upgrades as network traffic volumes grow is perhaps eliminated as well. Gee, that sounds pretty good, too.
So the WLAN architecture question comes down to two key decisions. Should the control plane be centralized or distributed, and should data from the AP flow through the controller? With respect to the latter, an increasing number of vendors have adopted what we call a direct forwarding architecture, in which data flows directly from a given AP to its destination and not through the controller. Some APs are "adaptive," exhibiting both modes of operation depending upon policy settings and traffic type. With respect to the former, some vendors have a fully distributed control plane and no controller box at all. At least one vendor has virtualized the controller entirely, reducing it to software running in a virtual machine. Diversity? You bet, and there's no end in sight.
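An "adaptive" AP's per-traffic decision can be sketched in a few lines. The policy below--tunnel guest and high-security traffic through the controller, bridge everything else locally--is a hypothetical example; real products define their own policy criteria and traffic classifications.

```python
def forwarding_mode(traffic_class: str, tunneled_classes: set) -> str:
    """Decide how an adaptive AP forwards a frame.

    Returns 'tunnel' (send via the controller) or 'bridge'
    (direct forwarding from the AP to the destination).
    """
    return "tunnel" if traffic_class in tunneled_classes else "bridge"

# Hypothetical policy: only these classes go through the controller.
policy = {"guest", "high_security"}

print(forwarding_mode("guest", policy))  # tunneled through the controller
print(forwarding_mode("voice", policy))  # forwarded directly from the AP
```

A fat-AP architecture is the degenerate case where the tunneled set is empty; a classic thin-AP design tunnels everything.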
The reason for that is simple. It's very likely that no single architecture will yield the best result (including in terms of price/performance) for all possible network load scenarios. WLAN performance evaluation is notoriously difficult, given varying radio conditions that defy efforts toward reproducibility, and no one has been able to run the definitive comparative tests, at least not yet. Of course, the continuing rapid evolution of underlying technology, standards, chipsets, and system implementations reduces even detailed comparisons in most cases to an analysis of performance at only a single moment in time, offering no long-term insight in the results.
The WLAN architecture question is unlikely to be settled soon, despite the fact that the performance of so many applications depends upon a detailed understanding of the tradeoffs inherent in this discussion. I continue to study advances in benchmarking and other comparative techniques that may eventually yield insight and perhaps even a trend one way or the other. Still, it remains unlikely that we'll end up with a simple, easy answer here, or that we'll be able to reduce the selection of the right product for the job to a simple analytical methodology or to picking the answer off a chart.
Craig Mathias is a Principal with Farpoint Group, a wireless and mobile advisory firm based in Ashland, MA. Craig is an internationally recognized expert on wireless communications and mobile computing technologies. He is a well-known industry analyst and frequent speaker at industry conferences and trade shows.