Riverbed Expands Support In The Cloud
Mike Fratto, Editor
November 18, 2009
On November 10th the tight-lipped Riverbed announced its product direction for cloud computing. Riverbed pulled out all the stops at Chelsea Piers in NYC, just before Interop NY 2009 got under way, with a late-day announcement that in 2010 the company will ship a virtual Steelhead appliance for cloud environments. Also next year, Riverbed is promising to optimize iSCSI traffic. Both announcements expand Riverbed's feature set and are another step forward in WAN optimization. All this comes amid rumors from Globes Online (which, in a weird coincidence, is powered by Radware, a competitor to both Expand and Riverbed) that the company is in negotiations to acquire Expand Networks.
Riverbed's virtual Steelhead follows similar product announcements by competitors: Citrix, with its NetScaler VPX, and Expand, with its Virtual Accelerator, both virtual appliances that can be installed in any hosting environment without deploying hardware. Targeted specifically at cloud computing Infrastructure as a Service (IaaS), the virtual Steelhead lets companies optimize traffic to and from a cloud service using a familiar product. Riverbed showed a demonstration of its virtual Steelhead running in Amazon's EC2. Interestingly, the virtual Steelhead works seamlessly for users running the Steelhead Mobile client: Steelhead Mobile automatically detects that another Steelhead is at the far end and optimizes the traffic without any user intervention. We won't know until the product has been fielded what kind of computing resources are needed to run a virtual Steelhead effectively, but we'd expect Riverbed to provide some guidance.
More interesting are Riverbed's plans to optimize iSCSI with data de-duplication and, more importantly, protocol optimization. With carrier Ethernet expanding in the metro and wide area, WAN bandwidth from 10 Mbps to 1 Gbps is getting more affordable compared to an MPLS or TDM circuit. When moving large blocks of data across the WAN, round trip time (RTT) is likely to be the limiting performance factor. If your average RTT is 100 milliseconds and a file transfer requires 1,000 round trips, the minimum time to move the file is 100 seconds, or just a bit more than one and a half minutes. Larger files require more round trips and take longer to transfer.
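The back-of-the-envelope math above can be sketched in a few lines of Python; the function name and parameters are ours, chosen for illustration:

```python
def transfer_time_seconds(round_trips: int, rtt_ms: float) -> float:
    """Lower bound on transfer time when round trips, not bandwidth,
    are the limiting factor: each trip costs one full RTT."""
    return round_trips * rtt_ms / 1000.0

# 1,000 round trips at 100 ms RTT -> 100 seconds, no matter how
# fat the pipe is.
print(transfer_time_seconds(1000, 100))  # 100.0
```

Note that adding bandwidth does nothing here; only cutting the RTT or the number of trips shortens the transfer.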
Chatty protocols like CIFS or iSCSI can easily require round trips numbering in the thousands, even for moderately sized files. Protocol coalescing, for instance, reduces the number of trips by aggregating several smaller packets into one larger packet, keeping the chattiness on the LAN side, where latency is below ten milliseconds, and cutting the number of round trips over the high-latency WAN.
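A simplified sketch of why coalescing helps, using hypothetical numbers (2,000 small requests, 80 ms WAN RTT, and an assumed batch size of 50 requests per trip; real optimizers batch adaptively):

```python
import math

def wan_round_trip_time(messages: int, per_trip: int, rtt_ms: float) -> float:
    """Seconds spent waiting on WAN round trips when `per_trip` small
    requests are coalesced into a single larger packet."""
    trips = math.ceil(messages / per_trip)
    return trips * rtt_ms / 1000.0

chatty = wan_round_trip_time(2000, 1, 80)    # one request per trip
batched = wan_round_trip_time(2000, 50, 80)  # 50 requests per trip
print(chatty, batched)  # 160.0 3.2
```

Batching 50 requests per trip cuts the WAN wait from 160 seconds to about 3 seconds in this toy model, which is the whole point of keeping the chattiness local.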
While Riverbed's iSCSI optimization is still in development, the company showed a demonstration sending a 23MB video file over a 10 Mbps WAN link with an 80 millisecond round trip time. Transferring the file over an unoptimized link would take about four minutes. A cold transfer on an optimized link--when the WAN optimizers are seeing the traffic for the first time--took approximately four seconds, or roughly six Mbps, due primarily to the protocol optimizations; video traffic is already compressed and doesn't lend itself to de-duplication within the same file. Of course, once the file had been transferred and de-duplicated on both sides, subsequent transfers took less than a second. With that kind of iSCSI performance, moving virtual machines from data center to data center will be much faster.
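Rough arithmetic on the demo figures puts the numbers in perspective (assuming a decimal 23 MB file and the reported times; this is our estimate, not Riverbed's):

```python
# Demo parameters as reported: 23 MB file, 10 Mbps link.
file_bits = 23 * 1_000_000 * 8   # file size in bits
link_bps = 10 * 1_000_000        # link capacity in bits per second

# If the link were the only limit, pure serialization would take ~18 s --
# far less than the ~4 minutes observed, so protocol chattiness dominates.
line_rate_minimum = file_bits / link_bps

unoptimized = 4 * 60   # ~240 s reported for the unoptimized transfer
optimized_cold = 4     # ~4 s reported for the cold optimized transfer

print(f"line-rate minimum: {line_rate_minimum:.1f} s")
print(f"cold-transfer speedup: {unoptimized / optimized_cold:.0f}x")
```

The gap between the 18-second line-rate minimum and the four-minute unoptimized transfer is the round-trip tax; the optimized cold transfer claws back roughly a 60x speedup almost entirely from protocol optimization.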