
Testing Enterprise Wireless: Good, Bad and Ugly

That point was reinforced recently when we published, in the February 17, 2005 issue of Network Computing, our analysis of the WLAN switch market, including a product review by Frank Bulk of our Syracuse lab. Now that the post-review storm has settled a bit, it seemed like a good time for some reflection on the power and limits of independent technology bakeoffs.


Our Process

Although we plan our edit calendar at least a year in advance, we
usually have about six months to plan and execute a typical Network
Computing cover story and product review. Drawing on our understanding
of the technology, we define initial evaluation criteria and a draft test plan. From a big-picture perspective, we evaluate key system attributes critical to enterprise IT, including performance, scalability, manageability, availability and cost of ownership. Where possible, we use automated test tools, often supplemented by field testing, and we usually develop pricing scenarios based on MSRP to provide objective cost comparisons, knowing that the actual acquisition cost is almost always negotiable. Like price comparisons, benchmarking gives us comparative data, but it is at best a simulation of reality, and we always try to point out its limitations. Still, we believe we provide IT professionals with objective information relevant to their own product selections.

Beyond complaints from vendors whose products don't win our Editor's Choice designation, experienced IT professionals often react to our reviews, pointing out weaknesses in our test tools and methodology and critiquing our weighting of evaluation criteria. Those comments led us to develop our Interactive Report Card, which allows readers to tailor evaluation weightings to their environment. We've also hired some of our biggest critics over the years and challenged them to do a better job.
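
At its core, that kind of reader-tailored weighting amounts to recomputing a weighted average over the graded criteria. The sketch below illustrates the idea only; the criteria, grades, weights and function names are hypothetical assumptions for illustration, not figures from any actual Network Computing report card.

```python
# A minimal sketch of the weighted-scoring idea behind an interactive
# report card. The criteria, grades and weights below are hypothetical
# examples, not data from any actual Network Computing review.

def weighted_score(grades, weights):
    """Combine per-criterion grades (0-5 scale) into one weighted average."""
    total_weight = sum(weights.values())
    return sum(grade * weights[criterion]
               for criterion, grade in grades.items()) / total_weight

# Hypothetical grades for one WLAN switch on the attributes named above.
grades = {
    "performance": 4.5,
    "scalability": 4.0,
    "manageability": 3.5,
    "availability": 4.0,
    "cost of ownership": 3.0,
}

# A default editorial weighting might treat every criterion equally.
editor_weights = {criterion: 1.0 for criterion in grades}

# A reader who cares most about cost and manageability can re-weight.
reader_weights = {**editor_weights, "cost of ownership": 3.0, "manageability": 2.0}

print(f"Editor-weighted score: {weighted_score(grades, editor_weights):.2f}")
print(f"Reader-weighted score: {weighted_score(grades, reader_weights):.2f}")
```

With equal weights the hypothetical product scores 3.80; shifting weight toward cost of ownership and manageability drops it to 3.56, which is exactly the kind of reordering readers see when they adjust the report card to their own priorities.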
