Special Coverage Series

Network Computing

Commentary


InteropNet: Insights from a 19-Year Vet

A nineteen-year InteropNet volunteer says it's the people as much as the technology that keep him coming back.

InteropNet is the technological foundation for the Interop conference, providing Wi-Fi for attendees and network connectivity for the exhibitors on the show floor. It's also a live demonstration of real-world interoperability, with equipment and applications from a host of vendors working together in real time throughout the conference.

While the InteropNet is only live for two weeks of the year (one in Las Vegas and one in New York), a lot of work goes on in between those conferences to make it happen.


One of the people doing that work is Chris Stradtman, principal at Novocaine Networks, who's been volunteering on the InteropNet team for nineteen years. This commitment says a lot about Chris, but it also says a lot about the people who come together to make InteropNet work. I'll let Chris tell you about it in his own words:

I guess you might say I'm one of the silverbacks on the InteropNet team. I've been doing this show since Atlanta 1994. The reason I've returned as a volunteer year after year for almost two decades basically boils down to one thing--the people. I've met some of the brightest, most knowledgeable, and most driven people in the industry.

[InteropNet's hot stage is the final step where hardware and software are assembled and configured. Get details in "Inside InteropNet's Hot Stage."]

The camaraderie seems to derive from the stress and adversity of building InteropNet. In a mobile network of this scale, things go wrong. Sometimes it's your own fault (installing new code onsite shortly before opening day). Sometimes it's completely out of your control (a forklift tine going through a DNS server).

Whatever the problem, InteropNet has to go live on time. The show's been on the books for a year or more, and people are arriving on schedule. The team will make sure the network functions even if they have to carry the bits to the booths in a bucket. As anyone who's worked with a production network knows, this pressure can cause some late nights and tense moments. Nerves fray and conversations get snippy. But in the end, everybody pulls together to launch InteropNet.

When the show is done, we enjoy the kinship that comes from having made it work. Many of these relationships extend well beyond the end of the show. I know I've tapped the "collective consciousness" of the group for valuable advice many, many times.

In an industry where people tend to sit on knowledge like an egg, this group willingly shares it. Every show I go to, I learn something (many things) from other team members. Most of the team thrives on the sharing and learning. I've seen interesting ideas sketched out on the backs of napkins in the bar that later showed up as IETF RFCs or significant products. I've also seen participating vendors turn around and act on team members' suggestions for product improvements.

It's also a rush to work with the new and bleeding-edge gear and technologies. Several times we've had a switch in the network or the labs with the serial number of "1"--or no serial number at all. We've also worked with products that were basically circuit boards in a cardboard wrapper. From a professional point of view it's been really valuable to have a crystal ball that sees that far into the future.

Yes, there is also a lot of unsexy grunt work to do. I don't know how many CAT5 terminations I've done on the show floors over the years. But you'll find the most experienced and talented engineers terminating cables or "dressing Peds" or carrying boxes of desktop switches across the building side by side with the newbies--because we're a team.

The InteropNet is open for tours and classes throughout the week at the Javits Center. Details are here. Stop by to ask questions or look around. And feel free to inquire about opportunities to participate--we're already starting work for Las Vegas.

Don't miss the workshop "Building Your Network for the Next 10 Years" at Interop New York. Register today!




