
Evaluating Networking Products: Best Practices to Get Behind the Sales Pitch

As a consultant, my clients often ask me to assist them with product selection. Occasionally, a customer will ask my opinion after talking with a sales representative for a networking vendor other than their incumbent. Often, the sales rep will suggest that the equipment he's selling works "just like Cisco" but costs significantly less.

The rep may also highlight the product's "industry-standard command line interface" to claim that no retraining of network operations staff is required. This proposition of lower-cost networking equipment that works just like the more expensive stuff and requires no retraining or difficult integration is usually very appealing to IT management.

Recent examples include:

● Dell Force10 and PowerConnect vs. Cisco Catalyst or Nexus

● Arista Networks vs. Cisco Catalyst 6500 or Cisco Nexus

● HP ProCurve vs. Cisco Catalyst

In these situations, whether you are a consultant or an employee helping to make a technology decision, it is important to remain objective and help assess the options without appearing resistant or biased. Simply dismissing the proposed solution with a response such as "Oh, you don't want to use vendor X, trust me" comes across as obstruction and may be interpreted as bias toward the incumbent vendor.

In management's eyes, this sort of reaction from the technical staff will not carry much weight. With no other input, the company may choose its product based on price alone, rather than using a holistic view of the overall environment.

Instead, I remind my client that if they are considering staking their data center or campus network on a new platform, they are wise not to take the sales rep's word for it. I offer my assistance and expertise in doing some evaluation. Rather than dive immediately into a field trial or pilot, however, I first consider and research the following:

Feature comparison: I don't mean counting up the total number of features supported by vendors X and Y and declaring a winner. In my experience, any given network environment uses only a handful of the features available on a given platform.

I find it valuable to study the device configurations currently in use, and look through data sheets, configuration guides, and even release notes to make sure a newly proposed platform could perform equivalent tasks and support the same protocols and features.

You may find that some obscure feature important for holding the network together isn't supported, or requires an expensive license that the sales rep forgot to mention. Examining configuration guides also provides some insight into how similar the command structures and interfaces are to what is already in use.
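One practical way to ground this comparison is to inventory the configuration statements actually in use before opening any data sheets. The short Python sketch below illustrates the idea under a few assumptions: configs are saved as plain-text .cfg files in a ./configs directory, and the SUPPORTED set is a hypothetical placeholder you would populate from the candidate vendor's documentation, not a real feature list.

```python
# Minimal sketch: list the top-level configuration statements in use and
# flag any that are not on the candidate platform's claimed-support list.
# The "./configs" directory and SUPPORTED set are hypothetical examples.
from pathlib import Path

SUPPORTED = {
    "vlan", "interface", "spanning-tree", "ip route", "snmp-server",
    # ...fill in from the candidate vendor's data sheets and config guides
}

def features_in_use(config_dir: str) -> set:
    """Collect distinct top-level statements from saved device configs."""
    found = set()
    for cfg in Path(config_dir).glob("*.cfg"):
        for line in cfg.read_text().splitlines():
            # Skip comments, blank lines, and indented sub-commands.
            if line and not line.startswith(("!", " ", "\t")):
                # Keep the first two words, e.g. "ip route" or "vlan 10".
                found.add(" ".join(line.split()[:2]))
    return found

if __name__ == "__main__":
    for feature in sorted(features_in_use("./configs")):
        if not any(feature.startswith(s) for s in SUPPORTED):
            print("Verify support (and licensing) for:", feature)
```

Even a crude list like this makes the conversation with the sales rep concrete: each flagged line becomes a specific question about support, defaults, and licensing rather than a general impression.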

Software stability: Vendor support forums, bug databases (if accessible), general networking forums, and Twitter are good places to search or poll others to find out about software issues. Every vendor has software problems, but it's easy to get a feel for the severity and volume of a particular vendor's software problems, as well as an idea of their responsiveness to raised issues.

Unless a customer has a very extensive testbed network, doing real testing of software stability is a very difficult task. This is one area of research I find better to explore through others' experiences.

Documentation: I request access to the vendor's website and take a look at their documentation. Does it include detailed configuration examples and feature details? Are the release notes helpful in understanding the changes from one version to another?

I've seen vendor documentation that is little more than a list of each command or menu option and a short bit of text, sometimes little more than the context help in the CLI, briefly mentioning what the feature does. This may result in lots of trial and error, or low confidence in making changes if the nuances of a command are not well explained.

Talent pool: Some vendors are more popular than others. Companies considering a switch to a vendor with a smaller installed base should consider whether they are willing to cultivate product expertise in house, or whether consulting talent for the platform is readily available for hire.

The available talent pool becomes very important during staff turnover, changing technical needs, and ongoing network growth. Despite what a sales rep may claim, every product line has its nuances, and a body of real-world experience is critical to the successful operation of any platform.

Diagnostic tools: What sorts of diagnostic tools and troubleshooting aids do the platforms provide? I've worked on many routers, switches, and firewalls over the years that have few diagnostic tools besides a ping or traceroute.

Other platforms provide extensive means to examine and debug the state and operation of the device to aid in troubleshooting. I consider the diagnostic capabilities of a networking appliance a critical feature for rapid problem resolution, and one that is often undervalued when comparing products on price.

Consistency and compatibility: When designing a greenfield network, this may be a minor concern. When integrating a new product portfolio into an existing campus or data center network, however, you must consider whether the new product will interact gracefully with the existing network and what steps may be needed to ensure compatible behavior between vendors.

For example, inserting non-Cisco switches into a Cisco-based network, or vice versa, may have implications for Spanning Tree Protocol configuration if sub-optimal switching paths are to be avoided. Just because two product lines share a feature does not mean they use the same defaults, and that may require careful planning during deployment, or even pilot testing, to minimize disruptions.
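As a small illustration of that kind of pre-deployment check, the Python sketch below scans the same hypothetical directory of saved configs and points out switches that leave spanning-tree mode or bridge priority implicit; defaults are exactly where mixed-vendor behavior tends to diverge. The keywords are loosely IOS-style and meant as a sketch, not a vendor-exact audit.

```python
# Minimal sketch: flag configs that rely on implicit spanning-tree defaults,
# since default STP modes and priorities differ between vendors.
# File layout and keywords are illustrative (loosely IOS-style), not exact.
from pathlib import Path

def stp_review(config_dir: str) -> None:
    for cfg in Path(config_dir).glob("*.cfg"):
        lines = cfg.read_text().splitlines()
        mode_set = any(l.strip().startswith("spanning-tree mode") for l in lines)
        priority_set = any("spanning-tree" in l and "priority" in l for l in lines)
        if not mode_set:
            print(cfg.name, "- STP mode not set explicitly; confirm the default matches the rest of the network")
        if not priority_set:
            print(cfg.name, "- bridge priority left at default; root placement may change as switches are added")

if __name__ == "__main__":
    stp_review("./configs")
```

The point of a check like this is not to replace reading each vendor's documentation, but to surface where the existing network is relying on implicit behavior before a second vendor's defaults enter the picture.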

After evaluating the proposed platform on the above criteria, I am able to help my client make a more informed decision about whether a competitive product is likely to be a good fit and appropriate for their environment.

By performing this research and evaluating a product against the installed platform, we avoid spending the time and energy -- or taking on the risk -- of a pilot deployment. And we do it objectively, without showing resistance or appearing biased toward a particular vendor.

If an alternative vendor looks like a good fit after this analysis, we then have a better idea of the integration effort and the relative value proposition of a migration. Finally, then, it's time to get some test gear and see how things go in the real world.