Special Coverage Series

Network Computing


Commentary

Lee H. Badman, Network Computing Blogger

Interop Preview: When Good Wireless Feels Bad

Users are quick to condemn wireless when they experience problems, but the WLAN is often not to blame. Find out what you can do about it at Interop New York.

Users of a well-designed corporate or campus WLAN can get a bit spoiled. They get used to high performance and reliability to the point where they forget that wireless is a bolt-on to the wired network. They want to use any device, at any time, in any location. They are blissfully ignorant of the thousands of moving parts that perform the amazing technical ballet that makes the WLAN and wired network tick.

When problems hit, users immediately cry that the wireless network is letting them down, regardless of what's really happening. This is the subject of my session at Interop New York, "When Users Think Your Good WLAN Is Bad."


I'll talk about the non-WLAN things that can bork out and result in trouble tickets for your wireless networks. And that list can be long, depending on how big the overall enterprise is. From load balancers to DNS and the 3G side of iPads to poorly orchestrated broadcast applications, your WLAN is going to get blamed for things it couldn't do even if it wanted to.
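To make that triage concrete, here's a minimal sketch of the kind of first-pass check a helpdesk or network team could script to separate "the Wi-Fi is broken" from "something upstream, like DNS, is broken." The gateway address and hostname are placeholders, and the assumption baked in is a simple one: if the client can reach its default gateway, the wireless link itself is probably fine.

```python
# Hypothetical first-pass triage: is it the wireless link, or something upstream?
# Assumption: reaching the default gateway means the WLAN side is healthy.
import socket
import subprocess

def link_ok(gateway: str, timeout_s: int = 2) -> bool:
    """True if the default gateway answers a single ping (Linux-style flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), gateway],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def dns_ok(hostname: str = "example.com") -> bool:
    """True if the local resolver can look up a well-known name."""
    try:
        socket.getaddrinfo(hostname, 80)
        return True
    except socket.gaierror:
        return False

def diagnose(link_up: bool, dns_up: bool) -> str:
    """Map the two checks to a first-pass verdict for the trouble ticket."""
    if not link_up:
        return "wireless link problem"
    if not dns_up:
        return "DNS problem, not the WLAN"
    return "WLAN and DNS look fine; look further upstream"

if __name__ == "__main__":
    link = link_ok("192.168.1.1")  # placeholder: your default gateway here
    print(diagnose(link, dns_ok()))
```

It's deliberately crude, but even a check this simple can redirect a "your network sucks" ticket toward the actual failing component before anyone starts rebooting access points.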

To the client, it's all the same. To you, it can be maddening unless you learn how to deal with inevitable, mistaken cries of, "Your network sucks."

[Sports stadiums and other large venues are beefing up WLANs. Get details on the latest developments in "Philadelphia Eagles Join Stadium Wi-Fi Stampede."]

And just how do you counter false alarms? How do you keep the train of perception on the rails as well as you keep the network healthy? I'll lay out a tapestry of approaches that I use in my day job at Syracuse University. These include communications and management buy-in, as well as thickening of the skin. There is no one-size-fits-all approach, but I can share common concerns that will apply regardless of organizational makeup or network design.

Mobility is expanding at an insane pace, wireless is hot and only getting hotter, and Ethernet is being marginalized in favor of the power of portability demanded by a generation of cool new devices. At the same time, the laws of physics dictate that wireless can never be as fast or as statistically reliable as wired Ethernet.

Ideally, the WLAN would just "be there" day in and day out. Usually that's the case, but problems do arise, and they are seldom attributable to the WLAN itself. Yet I spend my share of time defending the WLAN. I'm looking forward to talking about it with you at Interop.



