Streaming The Vancouver 2010 Olympics From Mobile To HD
February 10, 2010
If you watch the Olympics from your Web browser, the HD video stream will be coming to you live from Vancouver, British Columbia, with a scant 90-second delay. This feat is orchestrated by iStreamPlanet, and the capabilities to do so were assembled in just twelve days. The company, along with partners Akamai, Arista Networks, Intel, Microsoft, and Switch Communications, put together a fully automated workflow system to stream the Winter Olympics. An operation of this magnitude would normally require hundreds of people for the duration of the event. For example, thirty people were required to stream January's two-hour "Hope for Haiti Now" telethon. By comparison, iStreamPlanet is running closer to 1,500 hours of video with the same number of people.
Two factors play a central role in streaming the Olympic events: automation and a rock-solid network. iStreamPlanet integrated its workflow service with NBC's event schedule, so that as cameras begin to roll and the video starts its journey to the Internet, the workflow system automatically correlates each incoming stream with its scheduled event.
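The article doesn't describe the correlation logic itself, but the idea can be sketched in a few lines: match a stream's start time against the published schedule within some tolerance. The event names, times, and tolerance below are all hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical schedule: event name -> scheduled start time (UTC).
SCHEDULE = {
    "Men's Downhill": datetime(2010, 2, 13, 18, 30),
    "Biathlon Sprint": datetime(2010, 2, 13, 21, 15),
    "Pairs Figure Skating": datetime(2010, 2, 14, 1, 0),
}

def correlate(stream_start, tolerance=timedelta(minutes=20)):
    """Match a stream's start time to the nearest scheduled event,
    or return None if nothing starts close enough."""
    name, scheduled = min(SCHEDULE.items(),
                          key=lambda kv: abs(kv[1] - stream_start))
    if abs(scheduled - stream_start) <= tolerance:
        return name
    return None

print(correlate(datetime(2010, 2, 13, 18, 34)))  # Men's Downhill
```

A production system would key on richer metadata than timestamps alone, but the principle is the same: the schedule drives the workflow, so no human has to label the feed.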
As the video arrives at iStreamPlanet in Las Vegas from NBC studios in New York, the automation system inserts ads and other programming features for the Web player and simultaneously encodes the video into six formats, from approximately 400Kbps for highly compressed viewing on handhelds and low-bandwidth devices to 3.5Mbps HD-quality video at 720p.
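A set of renditions like this is often called an encoding ladder. The article gives only the endpoints (six formats, ~400Kbps to 3.5Mbps at 720p); the intermediate bitrates and resolutions below are illustrative guesses, as is the bandwidth-headroom rule for picking one.

```python
# Hypothetical six-step encoding ladder spanning the bitrates cited
# in the article; only the two endpoints come from the source.
LADDER = [
    {"name": "mobile", "bitrate_kbps": 400,  "resolution": "320x180"},
    {"name": "low",    "bitrate_kbps": 700,  "resolution": "512x288"},
    {"name": "medium", "bitrate_kbps": 1200, "resolution": "640x360"},
    {"name": "high",   "bitrate_kbps": 1800, "resolution": "960x540"},
    {"name": "hd",     "bitrate_kbps": 2500, "resolution": "1280x720"},
    {"name": "hd_top", "bitrate_kbps": 3500, "resolution": "1280x720"},
]

def pick_rendition(available_kbps, headroom=0.8):
    """Choose the highest rendition that fits the measured bandwidth,
    keeping some headroom so playback doesn't stall."""
    budget = available_kbps * headroom
    fits = [r for r in LADDER if r["bitrate_kbps"] <= budget]
    return fits[-1] if fits else LADDER[0]

print(pick_rendition(5000)["name"])  # hd_top
```

Encoding every feed at all six rates up front is what lets the player switch quality mid-stream without any server-side transcoding.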
iStreamPlanet uses Microsoft's Smooth Streaming technology in Silverlight, which treats video more like a file transfer than a video stream. Once the event is over, the video is pushed to a content management system, where it becomes available on demand.
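"More like a file transfer" means the player requests short fragments as ordinary HTTP objects, which caches and CDNs handle like any other file. A rough sketch of the request pattern, using a hypothetical URL template (real Smooth Streaming manifests define the exact one):

```python
# Hypothetical Smooth-Streaming-style fragment URL template. Each
# fragment is a plain HTTP GET, so it can be cached, retried, and
# proxied like a file download rather than a continuous stream.
TEMPLATE = "http://example.com/event.isml/QualityLevels({kbps})/Fragments(video={t})"

def fragment_urls(duration_s, fragment_s=2, kbps=3500, timescale=10_000_000):
    """Yield the fragment URLs a player would request, one short
    chunk at a time, at the chosen bitrate."""
    for start_s in range(0, duration_s, fragment_s):
        yield TEMPLATE.format(kbps=kbps, t=start_s * timescale)

urls = list(fragment_urls(10))
print(len(urls))  # 5 fragments cover a 10-second clip
```

Because every fragment names its bitrate, the player can change quality between fragments simply by requesting the next one from a different rung of the ladder.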
In Vancouver, twenty-three video feeds are encoded using H.264 HD and multicast at roughly 17Mbps each over an OC-12 (622Mbps) to NBC. In addition, five Canadian TV streams are transmitted via satellite from Vancouver to iStreamPlanet, as well as two video streams from Toronto, for a total of 30 HD streams. After iStreamPlanet processes and encodes the video, it's pushed to an origin farm at Switch Communications' SuperNAP in Las Vegas over a 1Gbps dark fiber connection, with a backup 1Gbps connection via Cox Communications.

The origin farm--where the video streams are broken into two-second segments--consists of two redundant hardware racks of 23 Dell R710 servers running Intel Nehalem 5500 chips, with multiple Intel Gb Ethernet NICs teamed for 2Gbps of throughput per server. The servers connect to Arista 7048 switches via multiple Gb Ethernet links; the switches in turn connect to the carriers over 10Gb Ethernet uplinks. The incoming video feeds dump between 350 and 750Mbps on the servers, which then have to process and render the video for distribution.

Once the origin servers begin breaking up the video, it is sent to six regional areas on Akamai within 10 milliseconds for distribution to viewers. The origin servers are monitored by iStreamPlanet's staff and can cut over to the secondary origin servers at the first sign of trouble.
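The article's numbers hang together, and a quick back-of-the-envelope check shows the margins involved (the rack-egress figure is simply servers times teamed-NIC throughput, not a measured value):

```python
# Sanity-check the figures quoted in the article.
feeds = 23
mbps_per_feed = 17
incoming = feeds * mbps_per_feed      # Vancouver contribution, Mbps
oc12 = 622                            # OC-12 line rate, Mbps

assert incoming <= oc12               # the OC-12 has comfortable headroom
print(f"Vancouver feeds: {incoming} Mbps of a {oc12} Mbps OC-12")

# Each origin server teams Gb NICs for ~2 Gbps; a rack of 23 such
# servers gives a theoretical egress ceiling of:
servers, gbps_per_server = 23, 2
print(f"Rack egress ceiling: {servers * gbps_per_server} Gbps")
```

The 391Mbps of Vancouver feeds sits inside the 350-to-750Mbps load the article cites for the servers, with the satellite and Toronto streams accounting for the rest.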
The Arista 7048 switches were selected for their large memory buffers of 768MB. Latency isn't a major issue on this project, says Arista's Doug Gourlay, because iStreamPlanet delays the video streams by 90 seconds. More of a concern are buffering and dropped packets. The distribution to Akamai is via TCP, so a dropped packet causes a retransmit and delays delivery. When that happens between a viewer and Akamai, it affects that one viewer. When packet loss occurs between the origin servers and Akamai, it affects everyone.
Packets may be dropped during periods of severe network congestion. When a network is clear, switches and routers process packets as they come in, but when congestion occurs, intervening switches and routers queue packets in memory buffers until they can be sent. If congestion is bad enough--more data coming in than can be forwarded--switches will selectively drop packets, using a number of methods, to make room for new traffic. At the origin, the delay caused by retransmitting a dropped packet is amplified to all viewers. Congestion can arise from two sources: the origin servers receiving the encoded streams from iStreamPlanet may become overwhelmed and have to reduce the rate at which they accept data, or Akamai's distribution servers--or any of the intermediate hardware--may do the same. As long as there is queue space available in the switches' buffers, dropped packets won't be a problem.
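The size of that safety margin is easy to estimate. If traffic arrives faster than a port can drain it, the excess accumulates in the buffer until the buffer fills and drops begin. This is illustrative arithmetic on the quoted 768MB figure, not a model of the switch's actual queueing behavior:

```python
# How long can a 768 MB buffer absorb a sustained overload?
buffer_bits = 768 * 1024 * 1024 * 8

def absorb_seconds(excess_mbps):
    """Seconds of sustained overload (arrival rate minus drain rate)
    the buffer can soak up before it must start dropping packets."""
    return buffer_bits / (excess_mbps * 1_000_000)

for excess in (100, 500, 1000):
    print(f"{excess} Mbps overload -> {absorb_seconds(excess):.1f} s of slack")
```

Even a full 1Gbps of excess traffic takes several seconds to exhaust the buffer, which is why the deep-buffered 7048s were a good fit for bursty video segment pushes.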
During the design phase, iStreamPlanet ran a full-speed test for 48 hours with zero packet loss. Now that's an Olympic feat.