Commentary

Lee H. Badman, Network Computing Blogger

The Evolution of Wireless Bridging

Wireless bridges that provide high-speed, site-to-site connectivity have boosted throughput from a few megabits per second to a gigabit and beyond, but challenges such as licensing and beam alignment remain.

Wireless bridges are a great option for providing network connectivity between physical locations. On the college campus I help support, there are buildings that need connectivity but can't be reached with fiber. Leased lines or site-to-site VPNs are options, but they can be expensive and complex. Wireless bridges are my solution of choice where clear line-of-sight exists, and I'm seeing a change for the better in my bridging options.

Before we talk products, be aware that in a bridging project, RF knowledge is the key to a successful installation. You must understand principles such as free-space path loss, wind loading, and how distance and modulation affect data rates. Most failed bridge projects trace back to poor installation or the wrong bridge for the use case, so don't underestimate the challenge.
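To make free-space path loss concrete, here's a minimal sketch in Python using the standard FSPL formula (the distances and frequencies are illustrative, not tied to any product mentioned here):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# Illustrative: a 1-mile (1609 m) path at 5.8 GHz vs. 80 GHz
print(f"{fspl_db(1609, 5.8e9):.1f} dB")  # ~111.8 dB
print(f"{fspl_db(1609, 80e9):.1f} dB")   # ~134.6 dB
```

Note how the same path costs roughly 23 dB more at 80 GHz than at 5.8 GHz; this is why higher-frequency bridges lean on very high-gain, narrow-beam antennas.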


To understand the significance of the bridge options that are available today, it's worth a quick personal history lesson. Back in the days of Cisco's autonomous 802.11b wireless access points (the late '90s and early 2000s), a bridge version of the venerable model 350 and a highly directional antenna gave networkers a then-powerful weapon for getting a few megabits per second of backbone connectivity to a far-flung building.

Then came Cisco's 1300 (11g) and 1400 (11a) 54-Mbps bridges, and a new mode in its 1200-series (and newer) APs that let an access point be turned into a bridge. I've installed them all, and still have a 1400 in service that generally works flawlessly at around 14 Mbps over a mile. That's double the bandwidth we would otherwise pay for monthly with a business-class Road Runner connection or similar wired option, at no cost beyond the initial install.

Unfortunately, the 1300s weren't as reliable as Cisco's 1200 APs in bridge mode, and eventually they fell victim to the crowded 2.4-GHz space as Wi-Fi signals spread across our campus and the neighborhoods that we shoot our bridges across.

In 2010, Cisco stopped developing its own bridges and partnered with Exalt to sell and support the Exalt r5005. The r5005 has been a dream to use, and provides quick ROI versus paying for local ISP connectivity and then engineering a logical path back to the home network.

The r5005 works in unlicensed 5-GHz spectrum, and provides 80 Mbps in each direction (160 Mbps aggregate) at distances up to 3 miles when properly aligned. It has become my bridge of choice for many point-to-point needs, as most installers find it a breeze to install and align. But we're still talking networking here, and sometimes you need even more bandwidth.
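A quick link-budget sanity check is worth running before you commit to any distance claim. This is a rough sketch with assumed numbers (the transmit power, antenna gains, and receive sensitivity below are hypothetical, not r5005 specs):

```python
import math

def fspl_db(distance_m, freq_hz):
    # Standard free-space path loss formula
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

# Hypothetical 5-GHz link over 3 miles (~4828 m)
tx_power_dbm = 20   # transmitter output (assumed)
tx_gain_dbi  = 23   # integrated directional antenna (assumed)
rx_gain_dbi  = 23
rx_sens_dbm  = -74  # receiver sensitivity at the target data rate (assumed)

rx_level = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(4828, 5.8e9)
print(f"Received signal: {rx_level:.1f} dBm")           # ~ -55.4 dBm
print(f"Fade margin: {rx_level - rx_sens_dbm:.1f} dB")  # ~ 18.6 dB
```

A healthy fade margin (10 dB or more is a common rule of thumb) is what keeps a link up through rain, fog, and the odd flock of birds.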

Through an overseas project I was involved in, my team ended up with a licensed-spectrum link, a carrier-grade Bridgewave FE80XU. Providing 100-Mbps full-duplex in the 80-GHz band, it was my first foray into the licensing process for bridge links.

Should you opt for a licensed solution, expect both delay and additional cost for the FCC-required paperwork. Also be advised: aligning a link with a tight beamwidth between endpoints can kick your butt. For perspective, the Exalt r5005 has a 10-degree beamwidth, while the Bridgewave FE80XU's is 0.4 degrees (yes, four-tenths of one degree), making it almost a laser beam at a distance of one mile. That's a challenge to line up if you've never done it before.
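The arithmetic behind that comparison is simple: the beam footprint at distance d is roughly 2 * d * tan(beamwidth / 2). A quick calculation (my own back-of-the-envelope math, not vendor figures):

```python
import math

def beam_footprint_m(distance_m, beamwidth_deg):
    """Approximate beam diameter at a given distance."""
    return 2 * distance_m * math.tan(math.radians(beamwidth_deg / 2))

MILE_M = 1609.34
print(f"10-degree beam at 1 mile: ~{beam_footprint_m(MILE_M, 10):.0f} m wide")    # ~282 m
print(f"0.4-degree beam at 1 mile: ~{beam_footprint_m(MILE_M, 0.4):.1f} m wide")  # ~11.2 m
```

Hitting an 11-meter-wide target from a mile away, with both ends on masts that flex in the wind, is a very different job from hitting a 282-meter one.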

Invariably, backbone links that approach or exceed a gigabit are needed at larger remote sites, even when wireless bridging is the only path to the core. My first Gig bridge was a short-range, unlicensed Bridgewave AR60, which struggles beyond a few thousand feet. Working in the 60-GHz spectrum is a curious study in the lofty topic of oxygen absorption. This bridge proved to be the most difficult to align that my team has ever installed, but it delivered the promised Gig connectivity nicely.
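For a sense of why 60 GHz is so distance-limited: atmospheric oxygen adds on the order of 15 dB of attenuation per kilometer at that frequency, on top of free-space path loss (a commonly cited approximation; the exact figure varies with conditions). A rough sketch:

```python
import math

O2_LOSS_DB_PER_KM = 15.0  # approximate oxygen absorption at 60 GHz

def path_loss_60ghz_db(distance_m):
    # Free-space path loss plus oxygen absorption over the path
    fspl = 20 * math.log10(distance_m) + 20 * math.log10(60e9) - 147.55
    return fspl + O2_LOSS_DB_PER_KM * (distance_m / 1000)

for d in (500, 1000, 3000):
    print(f"{d} m: {path_loss_60ghz_db(d):.1f} dB total loss")
# 500 m:  ~129.5 dB
# 1000 m: ~143.0 dB
# 3000 m: ~182.6 dB
```

The loss grows much faster than distance alone would suggest, which is why the AR60 works well over a few hundred meters and falls off a cliff not far beyond.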


You learn as you go in this space, and our next high-capacity bridge link was also licensed, this time in the 18-GHz band: the ExtremeAir from Exalt. Sharing the same slick interface as the r5005, this Exalt bridge was also great to work with, and currently provides a remote building full of happy users with a reliable full-duplex 700 Mbps of wireless uplink to the campus network.

To keep installations and performance consistent, it's good to standardize as much as possible. At the same time, you have to stay on top of what's out there as available bandwidth increases at more attractive price points.

We're getting ready to evaluate the Ubiquiti AirFiber, which promises the best of all worlds: low cost, license-free operation (in 24 GHz), long distance, and Gigabit+ aggregate speeds. We're also looking at LigoWave 2x2 MIMO 5-GHz bridges as a contender to the Exalt r5005, as they come highly recommended for similar performance at a fraction of the cost.

My own point-to-point story is just the tip of the iceberg for product options. Ruckus and Aruba customers have native bridging options, but sometimes these products can't be centrally managed alongside APs with the vendors' native management tools (check before you purchase).

There are open-source-based bridges (Linux/FreeBSD, DD-WRT and OpenWRT) to be had for well under $100 if you feel adventurous, and there are also high-capacity, six-figure, free-space optics (laser) options for hyper-critical links.

Don't let the lack of fiber to a remote building get you down, and don't get sucked into costly ISP connections if wireless bridging is an option.


