Greg Ferro



I'm Your Customer, Not Your QA Department!

Bugs happen. It's a fact of life. We accept them. We plan for them. We find them. We escalate them to vendors. Then we work around them. We hack. We patch. And we hope we can mitigate the impact. But bugs are operational failures by vendors: they not only waste a company's time tracking them down, they have the long-term effect of stopping companies from adopting new products and features for the sake of reliability. Here are four misconceptions vendors tell themselves, and us, about software bugs.

1. Software is too complex to find all the bugs.
This is true. Software is so complex, so far beyond full human comprehension, that bugs will occur. However, this does NOT mean that bugs are acceptable. Bugs that could be detected with good procedures and controlled testing should never occur in public.
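
To make that concrete, here is a minimal sketch of the kind of controlled, repeatable test I mean. The function and the edge case are hypothetical, not drawn from any vendor's code; the point is that boundary conditions, such as an RFC 3021 /31 point-to-point link, are exactly what disciplined pre-release testing should catch.

# A hypothetical sketch: a small regression test that pins down an
# easy-to-miss boundary condition before the code ever ships.
import ipaddress

def usable_host_count(prefix: str) -> int:
    """Return the number of usable host addresses in an IPv4 prefix."""
    net = ipaddress.ip_network(prefix, strict=True)
    # RFC 3021: a /31 has two usable addresses; a /32 has one.
    if net.prefixlen >= 31:
        return net.num_addresses
    # Otherwise, exclude the network and broadcast addresses.
    return net.num_addresses - 2

def test_usable_host_count():
    assert usable_host_count("10.0.0.0/24") == 254
    assert usable_host_count("10.0.0.0/31") == 2   # the classic edge case
    assert usable_host_count("10.0.0.1/32") == 1

if __name__ == "__main__":
    test_usable_host_count()
    print("all checks passed")

A naive implementation that always subtracts the network and broadcast addresses would return zero for the /31, and a test like this flags that long before a customer tries to number a point-to-point link.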

2. Bugs are OK as long as vendor support is responsive and staffed with good people.
No. It's not OK. I don't want to waste my time finding, reporting and fixing your bugs. I want to spend my time making things work, not troubleshooting your equipment and software that shouldn't be broken. I should never have to call your support about a software bug because it should work correctly and reliably.

In fact, if your support organization is "best in the world," perhaps that's because you have so many bugs and failures that great support is hiding your product's problems.

3. Customers need to do their own testing and prove the product works.
True, up to a point. A customer should take responsibility for ensuring that the product meets their needs and is used in the intended manner. That means selecting the right products, installing them correctly, and ensuring each product integrates well with the rest of the network and meets performance goals.

But when networking vendors tell customers to conduct their own tests, they fold software QA-type testing into that expectation. As a metaphor: when I need a new car, I choose a family sedan with good mileage and a reputable name. I do not expect to take that brand-new vehicle to a garage for testing to ensure it works correctly.

Not only is it not cost-effective for customers to find bugs; doing so undermines the value of the product. Vendors should make products that are reliable. Products should not need QA-type testing by customers once they have shipped (except in extreme situations).

4. Bugs are a fact of life. Everyone has them.
Finding bugs in network software is expensive for the customer, whose goals are reliable, fast networks that support other IT goals. When a network fails due to a software bug, IT's confidence in the reliability and integrity of the product erodes. That eroding confidence in turn leads to resistance to future changes, such as adopting new features, because IT predicts future failures based on past experience.

The result is old firmware staying in place because it's stable, while the new features in newer software releases go unused. In the long term, the network is not delivering new services to the business. Who wants to invest in a failed technology? Bugs are operational failures that lead to disinvestment in new technology.

Sound familiar? I don't think software quality in the networking industry gets enough focus. I've seen vendors claim that it's more important to deliver new features and new hardware to meet varying customer demands. That may be true, but right now I think we need a renewed focus on software quality. Let's get it right before we make it bigger and faster.

