Streaming video is almost always an HTTP/TCP application. The video server throttles bandwidth based on the amount of capacity selected for the session. Download-and-play video, by contrast, is a file-transfer operation and will use all the bandwidth it can grab. When we recorded a sample of downloads from YouTube, the average bandwidth used was about 4 Mbps, roughly the same as a single-program transport stream (SPTS).
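
To make that measurement concrete, here's a minimal sketch of how average download throughput might be sampled: read the transfer in chunks, time it, and divide. The URL is a hypothetical placeholder, not an actual video endpoint.

```python
import time
import urllib.request

def average_download_mbps(url: str, chunk_size: int = 64 * 1024) -> float:
    """Download the resource at url and return average throughput in Mbps."""
    total_bytes = 0
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.monotonic() - start
    return (total_bytes * 8) / (elapsed * 1_000_000)

if __name__ == "__main__":
    # Hypothetical URL for illustration only.
    rate = average_download_mbps("http://example.com/sample-video.flv")
    print(f"Average throughput: {rate:.1f} Mbps")
```
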
The amount of bandwidth you'll use also depends on the path the video takes; the goal is to transmit a stream as few times as possible on its way to viewers. Two techniques, caching and multicasting, support this goal. Multicasting has been around for a while, but it isn't supported on the public Internet, so we'll focus on caching.

Video caching works like Web-page caching: An origin server, in this case a video server, creates a unicast stream and sends it to a caching server near the viewer. From that server, the video is distributed using one of two techniques: multicasting or stream splitting. Stream splitting is similar to multicasting but doesn't depend on 224.x.x.x multicast addresses; instead, the caching server copies a single incoming stream to each viewer over unicast. Cisco and Blue Coat are among the many vendors that provide this capability. Blue Coat says its remote caching servers also provide other services, including viewer authentication and reporting.
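
For illustration, here's a minimal sketch of the stream-splitting idea, assuming plain TCP transport: the splitter pulls one unicast stream from the origin and copies each chunk to every connected viewer, so the origin link carries the stream only once. Hostnames and ports are invented for the example; commercial products use their own protocols.

```python
import socket
import threading

viewers = []            # connected viewer sockets
viewers_lock = threading.Lock()

def accept_viewers(listen_port: int = 9000) -> None:
    """Accept viewer connections and add them to the fan-out list."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("", listen_port))
    server.listen()
    while True:
        conn, _ = server.accept()
        with viewers_lock:
            viewers.append(conn)

def split_stream(origin_host: str, origin_port: int) -> None:
    """Pull one unicast stream from the origin and copy it to all viewers."""
    origin = socket.create_connection((origin_host, origin_port))
    while True:
        chunk = origin.recv(64 * 1024)
        if not chunk:
            break
        with viewers_lock:
            for v in list(viewers):
                try:
                    v.sendall(chunk)
                except OSError:
                    viewers.remove(v)   # drop viewers that have disconnected

threading.Thread(target=accept_viewers, daemon=True).start()
split_stream("origin.example.com", 8000)   # hypothetical origin address
```
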

Finally, for those contemplating video distribution over wireless, Meru Networks demonstrated eight HDTV signals being transmitted through a single 802.11n AP at this year's Interop. Something to look forward to.

Compression: The Other GOP

The MPEG encoder is usually configured to create a GOP, or group of pictures, using I-, P- and B-frames, each representing 1/30 second of video. An I-frame is essentially a JPEG-compressed image; spatial redundancy has been removed within the frame, but nothing more. While every frame could be transported as an I-frame, the bandwidth requirement would be unreasonable. So, the encoder calculates P-frames by removing temporal redundancy: each P-frame encodes only what has changed since the preceding I- or P-frame. As an example, consider a pitcher's body, which stays still while the ball moves from frame 1 to frame 2; only the ball's motion needs to be encoded.
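
Some back-of-the-envelope arithmetic shows the gap. The frame sizes below are illustrative assumptions, not measured values:

```python
FPS = 30
I_FRAME_KB = 50          # assumed size of one I-frame (JPEG-quality SD image)
P_FRAME_KB = 15          # assumed size of one P-frame
B_FRAME_KB = 5           # assumed size of one B-frame

def mbps(kb_per_second: float) -> float:
    return kb_per_second * 8 * 1000 / 1_000_000

# All-I stream: every frame is a full JPEG-compressed picture.
all_i = mbps(I_FRAME_KB * FPS)

# Typical 15-frame GOP (1 I, 4 P, 10 B), two GOPs per second at 30 fps.
gop_kb = I_FRAME_KB + 4 * P_FRAME_KB + 10 * B_FRAME_KB
typical_gop = mbps(gop_kb * (FPS / 15))

print(f"All I-frames: {all_i:.1f} Mbps")        # ~12 Mbps
print(f"15-frame GOP: {typical_gop:.1f} Mbps")  # ~2.6 Mbps
```
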

Next, the encoder creates B-frames, which remove temporal redundancy in both directions: a B-frame can borrow from frames before and after it. In our example, the position of the ball in frame 2 can be predicted by looking at frame 1 and frame 3. So, P-frames are derived from preceding I- or P-frames; B-frames are derived from preceding or succeeding I- or P-frames. As a result, if part of an I-frame is lost, the entire GOP is affected.
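
To see how far one lost I-frame propagates, here's a minimal dependency model of a GOP. The 15-frame pattern and the reference rules are assumptions that follow the description above:

```python
GOP = list("IBBPBBPBBPBBPBB")   # an assumed, typical 15-frame GOP layout

def frames_affected_by(lost_index: int, gop: list[str]) -> set[int]:
    """Return indices of all frames unusable once frame lost_index is lost."""
    affected = {lost_index}
    changed = True
    while changed:                      # iterate until no new frames are hit
        changed = False
        for i, ftype in enumerate(gop):
            if i in affected:
                continue
            if ftype == "P":
                # P depends on the closest preceding I- or P-frame.
                refs = [j for j in range(i) if gop[j] in "IP"]
                deps = {refs[-1]} if refs else set()
            elif ftype == "B":
                # B depends on the nearest I/P frame on each side.
                before = [j for j in range(i) if gop[j] in "IP"]
                after = [j for j in range(i + 1, len(gop)) if gop[j] in "IP"]
                deps = set(before[-1:]) | set(after[:1])
            else:
                deps = set()            # I-frames reference nothing
            if deps & affected:
                affected.add(i)
                changed = True
    return affected

# Losing the I-frame (index 0) corrupts every frame in the GOP.
print(sorted(frames_affected_by(0, GOP)))   # 0 through 14
```
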