Network Computing is part of the Informa Tech Division of Informa PLC


Fat Fingers

We've all done it: fat-fingered a config and not caught it until packets started disappearing or traversing a route they shouldn't have. Generally, though, you find the mistake within the first few days, if not hours, of that last "wr mem" on the router.
That's just what we did in the Green Bay lab, only our somewhat unusual configuration, designed specifically to enable multi-speed testing without massive reconfigurations, meant we didn't notice for nearly two years.
Sound impossible? Not in this case, but only because of a fairly cool setup of our network back in the days of Joel Conover.

The Green Bay Lab's core backbone sits between a Cisco Catalyst 6500 and an Extreme Summit 7i. The two are physically connected with 1 Gbps fiber and 100 Mbps Cat5e. A pair of Cisco 7200 VXRs provides T1 connectivity between the core routers via an uplink from each. The core routers handle all our routing duties and determine which of the three backbones to use based solely on availability and routing metrics: 1 for the 1 Gbps link, 2 for the 100 Mbps link, and 5 for the T1. This lets us drop a link and have the routers immediately fail over to the next available speed. ("Immediately" may be too generous; the Cisco takes a minute or two to deal with the situation, but it's close enough.)
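The article doesn't show the actual configs, but on the Cisco side a set of floating static routes, where the trailing number is the administrative distance, would produce exactly this lowest-metric-wins failover. The addresses, mask, and next hops below are hypothetical; the lab's real addressing isn't given:

```
! Hypothetical prefix and next hops; only the 1/2/5 preference order
! comes from the article.
! Primary path: 1 Gbps fiber backbone
ip route 10.20.0.0 255.255.0.0 10.0.1.2 1
! Secondary path: 100 Mbps Cat5e backbone
ip route 10.20.0.0 255.255.0.0 10.0.2.2 2
! Last resort: T1 via the 7200 VXR uplink
ip route 10.20.0.0 255.255.0.0 10.0.3.2 5
```

When the 1 Gbps next hop goes away, the distance-2 route takes over, then the distance-5 route, which matches the drop-a-link failover behavior described above.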

A few years ago I did some BGP-based testing that required reconfiguring both routers. After the test I restored both configurations, but apparently reversed the metrics on the 100 Mbps and T1 routes on the Extreme.
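In ExtremeWare-style syntax (static routes take a trailing metric), the slip would look something like this; again, the prefix and gateways are hypothetical:

```
# Intended: 100 Mbps path preferred (metric 2) over the T1 (metric 5)
configure iproute add 10.20.0.0 255.255.0.0 10.0.2.1 2
configure iproute add 10.20.0.0 255.255.0.0 10.0.3.1 5

# What apparently got entered: the metrics swapped
configure iproute add 10.20.0.0 255.255.0.0 10.0.2.1 5
configure iproute add 10.20.0.0 255.255.0.0 10.0.3.1 2
```

With the 1 Gbps route (metric 1) still correct on both boxes, nothing looks wrong until that link is taken out of the picture.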

And no one noticed until Don started testing WAFS products that required dropping the network down to the 100 Mbps backbone; all of my testing uses the 1 Gbps backbone, which was configured correctly.

As he started testing, Don saw strange behavior: an inordinate number of retransmits, incorrect checksums, duplicate ACKs. Tracing the route from client to server showed that outbound packets went out over the T1 and came back over the 100 Mbps backbone. The outbound path therefore bypassed the Shunra Storm he'd interposed in the 100 Mbps backbone to introduce latency and congestion for his testing, and it left us all scratching our heads. Running the same tests over the 1 Gbps backbone produced no errors, of course, which only deepened our confusion about why the 100 Mbps link was giving us so many problems.
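The asymmetry falls out of each router independently choosing its lowest-metric live route. A toy Python model, not real routing code, with the metrics from the story (1 Gbps down for the WAFS test, and the Extreme's 100 Mbps/T1 metrics reversed):

```python
def best_path(routes):
    """Pick the backbone with the lowest metric among links that are up.

    routes: dict mapping backbone name -> (metric, is_up)
    """
    live = {name: metric for name, (metric, up) in routes.items() if up}
    return min(live, key=live.get)

# Cisco 6500 side: metrics as intended; 1 Gbps link dropped for the test.
cisco = {"1gbps": (1, False), "100mbps": (2, True), "t1": (5, True)}

# Extreme 7i side: 100 Mbps and T1 metrics reversed by the fat-fingered restore.
extreme = {"1gbps": (1, False), "100mbps": (5, True), "t1": (2, True)}

print(best_path(extreme))  # outbound leg -> t1
print(best_path(cisco))    # return leg   -> 100mbps
```

Outbound traffic takes the T1 while return traffic takes the 100 Mbps backbone, exactly the asymmetric path the traceroute revealed.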
