
Analysis: Video in the Enterprise

Desktop or Room-based Videoconferencing: VC technology comes in two distinct forms: room-to-room and desktop-to-desktop. Desktop VC is well understood; because the output is displayed on a computer screen, it usually encodes video at a low frame rate and a small image size, and it is often HTTP-based, although other implementations exist.
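As a rough illustration of why those choices matter, here is a minimal sketch (in Python) of the arithmetic: the uncompressed bit rate of a small desktop image, and what a typical compression ratio brings it down to. The CIF resolution, 15-fps frame rate, 4:2:0 sampling, and 50:1 compression ratio are assumptions chosen for the example, not figures from any particular product.

def raw_bitrate_bps(width, height, fps, bits_per_pixel=12):
    # 12 bits per pixel corresponds to 4:2:0 chroma sampling (an assumption here)
    return width * height * bits_per_pixel * fps

raw = raw_bitrate_bps(352, 288, 15)  # CIF at 15 fps, a plausible desktop operating point
print(f"Raw CIF at 15 fps: {raw / 1e6:.1f} Mbps")             # about 18.2 Mbps uncompressed
print(f"After ~50:1 compression: {raw / 50 / 1e3:.0f} Kbps")  # a few hundred Kbps

Even with modest compression, keeping the image small and the frame rate low is what lets desktop VC fit comfortably on an ordinary office network connection.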

Room-based VC is usually two-way but may involve three or more locations, and it allows for a high degree of realism by using a squares paradigm (think of the old "Hollywood Squares" format). This form of conferencing was originally based on the H.320 standards and eventually evolved to H.264 codecs and IP transport. While it is still generally based on bandwidth allocations that are fractions of a T-1 circuit, typically either 128 Kbps or 384 Kbps, there are two gotchas. First, IP overhead may be 25 percent or higher on smaller frames. Second, in a conference session with three or more parties, some bridges create a full path for each square (party) shown on the screen, while other conferencing bridges combine the images and send a single signal to each output device. Bandwidth usage is controlled by setting the frame rate and image size (resolution) and by limiting subject motion. Note that the newest HD VC implementations are reported to use very high levels of bandwidth, up to 40 Mbps.
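To make the arithmetic concrete, the short Python sketch below estimates the bandwidth a bridge delivers to a single endpoint under those two gotchas. The 25 percent overhead figure and the per-square bridge behavior come from the description above; the 384-Kbps stream rate and the four-party conference are assumptions for illustration only.

def conference_bandwidth_kbps(stream_kbps, parties, ip_overhead=0.25,
                              full_path_per_square=True):
    # Bandwidth sent toward one endpoint, including IP overhead on each stream.
    per_stream = stream_kbps * (1 + ip_overhead)
    if full_path_per_square:
        # One full path for each square shown on screen (the other parties).
        return per_stream * (parties - 1)
    # Otherwise the bridge combines the images and sends a single signal.
    return per_stream

print(conference_bandwidth_kbps(384, 4))                              # 1440.0 Kbps
print(conference_bandwidth_kbps(384, 4, full_path_per_square=False))  # 480.0 Kbps

The gap between the two bridge designs (1,440 Kbps versus 480 Kbps per endpoint in this example) is why it pays to know which behavior your conferencing bridge uses before sizing WAN links.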

Streaming or Streamed Video: While its precise definition varies, streaming video is often characterized as much by the four vendors that dominate the market (Microsoft WM9, Adobe Flash, RealNetworks Real, and Apple Computer QuickTime) as by its terminology and its basis in proprietary standards.

To play video from a streamed source, you must have a player that is compatible with the encoder that was used. Streaming architectures depend on a source video server, optional proxy or caching servers, and the player. Large implementations will likely require caching servers, which may be supplied by a CDN (content delivery network) vendor. These networks overlay the underlying transport networks and supply caching capability and other control functions, such as authentication.

While it isn't a technical requirement, common file formats are usually transported over HTTP to allow browser control as well as player control. This is significant because it means the transport is based on TCP; as a result, such servers will use all of the bandwidth allocated to them. And because the traffic is carried over HTTP, it may be more difficult for IT to isolate the streaming video and limit its bandwidth usage.
That is in sharp contrast to MPEG traffic, which travels in a transport stream based on UDP or UDP/RTP. In either case, there will be a maximum bandwidth requirement dependent on settings in the encoder. Most often, CBR (constant bit rate) transmission is used. For example, if the encoder is set to output 6 Mbps, the MPEG bit stream will fill the MPEG transport packets at that rate. If the encoder can't supply enough video or audio bits, stuffing will be inserted. With packet overhead added, total bandwidth will increase by about 5 percent.
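As a rough sketch of where that overhead figure comes from, the following Python estimate assumes 188-byte MPEG transport-stream packets bundled seven to a datagram over RTP/UDP/IPv4 on Ethernet; those framing details are common practice but are assumptions here rather than anything stated above.

TS_PACKET = 188          # bytes per MPEG transport-stream packet
TS_PER_DATAGRAM = 7      # TS packets commonly bundled into one datagram
HEADERS = 20 + 8 + 12    # IPv4 + UDP + RTP headers, in bytes
ETHERNET = 14 + 4 + 20   # Ethernet header + FCS + preamble and interframe gap, in bytes

payload = TS_PACKET * TS_PER_DATAGRAM   # 1,316 bytes of MPEG data per datagram
wire = payload + HEADERS + ETHERNET     # bytes actually sent on the wire
overhead = wire / payload - 1

encoder_mbps = 6.0                      # the CBR encoder setting from the example above
print(f"Packet overhead: {overhead:.1%}")                              # about 5.9 percent
print(f"Total on the wire: {encoder_mbps * (1 + overhead):.2f} Mbps")  # about 6.36 Mbps

The result lands in the same ballpark as the roughly 5 percent figure above; the exact number depends on how many TS packets are bundled per datagram and on the link-layer framing.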