Over time, IT managers can expect the vendor community to take a proactive stance on Gettys' suppositions--if they hold true. It's reasonable to assume that over the next few years, best practices will emerge and BufferBloat may be conquered. But what about installed networking equipment?
In purchasing a WAN optimizer, IT managers should pay attention to the delay the device imposes on a given packet. I don't just mean optimizing TCP, or how quickly a WAN optimizer can return an object from cache. I mean looking at the various operations the WAN optimizer performs--the compression, the encryption, the deduplication and more--and getting an accurate read on just how much latency a packet picks up in traversing the system. This will give you a sense of the BufferBloat of that system, a concept Gartner calls "insertion latency".
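One way to get a feel for per-stage insertion latency is simply to time each operation in the pipeline on a representative payload. The sketch below is a rough illustration, not a vendor benchmark: it uses zlib compression and a SHA-256 fingerprint as stand-ins for a real optimizer's compression and deduplication stages, which would of course be proprietary and hardware-assisted.

```python
import time
import zlib
import hashlib

def measure_stage(label, fn, payload):
    """Time one optimization stage on a payload; returns (result, milliseconds)."""
    start = time.perf_counter()
    result = fn(payload)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Stand-in stages (assumptions for illustration): zlib for compression,
# SHA-256 fingerprinting as a crude proxy for a dedup lookup.
stages = [
    ("compress", lambda p: zlib.compress(p)),
    ("dedup-fingerprint", lambda p: hashlib.sha256(p).digest()),
]

payload = b"example WAN payload " * 256  # ~5 KB of repetitive traffic
total_ms = 0.0
for label, fn in stages:
    _, ms = measure_stage(label, fn, payload)
    total_ms += ms
    print(f"{label}: {ms:.3f} ms")
print(f"total insertion latency (sketch): {total_ms:.3f} ms")
```

Summing the per-stage times gives the kind of end-to-end "insertion latency" number worth asking vendors to publish.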
There are several ways to perform WAN deduplication, for example--some rely on tokens, others on start/stop instructions. Do some of these methods add more insertion latency than others? What impact does that have on the types of applications that can be optimized? I would think 10 ms or less would be required to support real-time traffic like voice and VDI. I'd welcome voices from the vendor community willing to share what their delay numbers look like.
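To make the token-based approach concrete, here is a minimal sketch of the idea, under my own simplifying assumptions (fixed-size chunks, an 8-byte fingerprint as the token): chunks already seen by both ends are replaced on the wire with a short token, and the far end expands tokens back from its chunk store. The per-chunk hashing and lookup is exactly where this method's insertion latency comes from.

```python
import hashlib

def tokenize(data, chunk_size=64, store=None):
    """Token-based dedup sketch: split data into fixed-size chunks and replace
    any chunk already in `store` with a short fingerprint token."""
    if store is None:
        store = {}
    out = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        token = hashlib.sha1(chunk).digest()[:8]
        if token in store:
            out.append(("TOKEN", token))   # send the 8-byte token, not the chunk
        else:
            store[token] = chunk
            out.append(("RAW", chunk))     # first sighting: send the raw bytes
    return out, store

def rebuild(stream, store):
    """Far-end reassembly: expand tokens back into their original chunks."""
    return b"".join(store[v] if kind == "TOKEN" else v for kind, v in stream)
```

A start/stop-instruction scheme would instead describe matched byte ranges against previously seen traffic, trading finer-grained matching for more bookkeeping per packet; which scheme adds less delay is precisely the question vendors should answer.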
Then there's the more fundamental issue of preventing the packet loss that contributes to the effect of BufferBloat. The importance of packet loss is hotly contested by Silver Peak, Riverbed and others, but increasingly it seems that no sane IT manager can ignore the effects of loss on their WAN. Addressing the issue of loss in the near and long terms is essential to network functioning. Those aren't just my words. Go ask Gettys.