Videoconferencing On Demand: Maybe Vidyo, Not Cisco

By virtualizing its video router, Vidyo adds the powerful dimension of scalability to its conferencing system. But it'll take some bigger players to make the technology pervasive.

Art Wittmann

November 8, 2011


Will videoconferencing ever be as pervasive as voice-only conferencing? For years, the argument ended with a quick look at the quality-versus-cost curve. If you want a grand room enabled for what Cisco has termed telepresence, it'll cost you a few hundred thousand dollars per copy, and you'll need a nice fat dedicated network to make it work. You'll have to leave the corporate jet idling for quite a while to justify the expense on travel savings alone. On the flip side, point-to-point, Skype-style videoconferencing hasn't been a rich enough experience to displace the phone--though that's rapidly changing.

If the economics were taken out of the equation--if you really could do high-quality videoconferencing for near the price of phone conferencing--the discussion would quickly change to one of user preferences and the value of actually seeing each other. We'll leave that discussion for another time.

Depending on who you talk to, the economics of videoconferencing are changing radically. The latest evidence is Vidyo's announcement Tuesday that it has certified its "video router" (what others call an MCU--more on that in a bit) to run in a virtual machine. Vidyo says it will release products using the new capability in 2012, aimed mostly at service providers and large enterprises. The virtualized capability is an interesting one for Vidyo and, while I haven't seen the actual products the company will release, I think there may well be a significant play for it with enterprises of all sizes.

[ Learn more. Check out our report on Enterprise Video: A Viable Option? ]

Exactly what impact Vidyo's announcement will have on the market depends on a lot of variables, including such wildcards as how the company packages and deploys the new technology and how industry giants like Cisco, Microsoft, and Polycom evolve their strategies.

It's likely that many of you aren't familiar with Vidyo, but those who are--and who've sat through one of the company's evangelical presentations--know it's convinced it has a better way to do videoconferencing. At an architectural level, it does in fact have a better way. If that sounds like a damning of the architectures of the big guys like Cisco and Polycom, good, because that's how I mean it.

If you could start with a clean slate and design a videoconferencing protocol, how would it work? Here are some likely design goals. First, you'd want to support a lot of different endpoints with lots of different capabilities--some will have dedicated large-screen video, while others will run video in windows on laptops or even mobile devices. You'd want to simultaneously support endpoints with lots of bandwidth as well as those on Wi-Fi, cable modems, and DSL lines, and even those on 3G wireless connections.

You'd start by recognizing that, with the possible exception of a few phones, endpoints--with their dual-core ARM or Xeon-class CPUs--have all the horsepower you'd want to do complex video encoding. If you could, you'd encode the video just once, and do it in such a way that you'd never have to decode and re-encode it again. Once the video is encoded, you'd want a device at the network layer that knows about the various endpoints and sends each one the data stream appropriate for its device. This would be largely a routing function: a matter of selecting, from the high-quality encoded stream, the right pieces for each endpoint.

This is not how Cisco and Polycom do it. These vendors take the encoded source, decode it at a central network device, and then re-encode the video stream for the various endpoints--a process called transcoding. Transcoding is done in real time, so it takes a lot of horsepower at that central device and can add latency to the video stream. Incumbent videoconferencing vendors do this partly for historical reasons and partly to protect their own revenue streams: if they got rid of their MCUs in favor of a system that simply sent out portions of an already encoded video stream, they'd no longer need most of the proprietary hardware that makes MCUs a high-cost, high-margin product.
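To make that contrast concrete, here's a minimal sketch in Python. It's purely conceptual: the decode and encode functions are trivial placeholders for real codec work, and none of the names are Cisco's, Polycom's, or Vidyo's actual APIs. The point is that the transcoding model does codec work per endpoint at the central device, while the encode-once model only picks layers.

```python
# Conceptual sketch only: decode() and encode() are trivial placeholders for real
# codec work, and nothing here is Cisco's, Polycom's, or Vidyo's actual code.

def decode(stream):
    return {"frames": stream["frames"]}                 # stands in for heavy decode

def encode(frames, kbps):
    return {"frames": frames["frames"], "kbps": kbps}   # stands in for heavy encode

def mcu_transcode(source, endpoints):
    """Traditional MCU: decode the source, then re-encode it separately for every endpoint."""
    raw = decode(source)                                 # CPU-heavy, adds latency
    return {name: encode(raw, kbps) for name, kbps in endpoints.items()}

def svc_route(layers, endpoints):
    """Encode-once model: forward the already-encoded layers each endpoint can use."""
    # Each layer's kbps is the total bandwidth needed up to and including that layer.
    return {name: [l["id"] for l in layers if l["kbps"] <= kbps]
            for name, kbps in endpoints.items()}         # selection, no codec work

endpoints = {"boardroom": 4000, "laptop": 1200, "phone": 400}
source = {"frames": "raw-1080p"}
layers = [{"id": "base", "kbps": 200}, {"id": "enh1", "kbps": 500},
          {"id": "enh2", "kbps": 1500}]

print(mcu_transcode(source, endpoints))   # three separate re-encodes of the same video
print(svc_route(layers, endpoints))       # three different subsets of one encoding
```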

Another thing that has held back the ideal encode-once architecture described above is the lack of an encoding standard that actually does what we've described. Here, some of Vidyo's principals and others have worked with the ITU and MPEG to build such a standard: H.264 SVC. H.264 is the high-definition encoding standard that's been in use for years; it's used for everything from satellite and cable TV decoders to video surveillance systems to videoconferencing. The SVC part is comparatively new, and it seeks to do exactly what we'd want.

SVC stands for Scalable Video Coding. It starts by creating a "base layer"--a low-resolution, low-frame-rate (and therefore low-bandwidth) version of the video stream--and then adds "enhancement layers" to increase resolution, frame rate, and quality. Once the video stream is encoded, distribution to endpoints just requires sending the base layer along with the appropriate enhancement layers. Using SVC, there's no need for transcoding, so the job of that central box shifts to managing the network connection--something that only it can do, and something it's well suited to do.
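As a rough illustration of how layer selection might work (the layers, resolutions, and bitrates below are invented for the example, not taken from any real SVC encoder or Vidyo product), the distribution point always forwards the base layer and then adds whichever enhancement layers the endpoint's screen and link can actually use:

```python
# Illustrative only: the layer ladder and numbers are made up to show the idea of
# an SVC base layer plus enhancement layers; they aren't real encoder output.

BASE = ("base", (320, 180), 15, 200)          # (name, resolution, fps, kbps)
ENHANCEMENTS = [
    ("enh1", (640, 360), 30, 300),
    ("enh2", (1280, 720), 30, 900),
    ("enh3", (1920, 1080), 30, 1800),
]

def layers_for(endpoint_resolution, endpoint_kbps):
    """Always send the base layer; add enhancement layers only while the
    endpoint's screen and link can use them."""
    name, _, _, kbps = BASE
    selected, spent = [name], kbps
    max_w, max_h = endpoint_resolution
    for name, (w, h), _, kbps in ENHANCEMENTS:
        if w > max_w or h > max_h or spent + kbps > endpoint_kbps:
            break
        selected.append(name)
        spent += kbps
    return selected

# A phone on a 3G link gets only the base layer; a wired desktop gets every layer,
# all carved from the same single encoding.
print(layers_for((480, 270), 350))      # ['base']
print(layers_for((1920, 1080), 4000))   # ['base', 'enh1', 'enh2', 'enh3']
```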

This brings up another failing of the typical MCU-based system. If network connections degrade or improve in quality, the "video router" (as Vidyo calls it) can either predictively drop packets or move the connection to different levels within the SVC standard. Flexibly changing resolution and frame rate isn't something that most legacy MCU systems can do. As a result, they're often implemented on top of very expensive private networks. The network costs for a fairly complex videoconferencing system can be just as prohibitive as the cost of MCU hardware and fancy telepresence rooms.
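Here's a similarly hedged sketch of that adaptive behavior. The bandwidth measurements are just random numbers standing in for whatever loss and delay feedback a real video router would use; the point is that when a link degrades, the router simply stops forwarding the upper enhancement layers, and nothing gets decoded or re-encoded along the way.

```python
import random

# Purely illustrative: the "measurements" below are random stand-ins for real
# network feedback, and none of this is an actual Vidyo API.

LAYER_LADDER = [("base", 200), ("enh1", 300), ("enh2", 900), ("enh3", 1800)]

def layers_within(budget_kbps):
    """Drop enhancement layers from the top down when the budget shrinks."""
    selected, spent = [], 0
    for name, kbps in LAYER_LADDER:
        if selected and spent + kbps > budget_kbps:
            break
        selected.append(name)
        spent += kbps
    return selected

def adapt(connection, ticks=5):
    """Re-evaluate the link each tick and forward a bigger or smaller layer set;
    no decoding or re-encoding happens at any point."""
    for _ in range(ticks):
        budget = random.choice([350, 700, 1500, 3500])   # stand-in measurement
        print(f"{connection}: {budget} kbps -> {layers_within(budget)}")

adapt("home-office-dsl")
```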

With this announcement of a virtualized video router, Vidyo believes service providers will be able to use its technology pervasively across their networks. Ideally, the video router sits fairly close to the video sources so that smart routing decisions can be made without incurring too many hops. For service providers, the virtualized version of the video router can be deployed throughout their networks without buying Vidyo hardware. A virtualized instance of the video router would also be an interesting addition to multiservice edge routers in enterprise networks of any size. One can imagine centralized control with distributed processing as a good way to reduce wide-area network demands.

There's no doubt that Moore's Law is on Vidyo's side in the architecture debate. Encoding and decoding at the endpoints can be CPU-intensive, but most of today's devices can handle it; even phones and tablets with dual-core ARM processors can do the job. But the Vidyo system isn't without challenges. As with other videoconferencing systems, a major impediment is bringing third-party users into a Vidyo-based environment for impromptu conferences. The Vidyo system requires software to be installed at the endpoint, which can be a problem on locked-down corporate laptops. That problem certainly isn't unique to Vidyo, but it's a limitation on making videoconferencing as pervasive as audio conferencing.

Polycom, for its part, made a number of announcements last year about its intent to support SVC, saying it would bring the technology to clients such as phones and tablets, though it wasn't clear the company would use it beyond those applications. At the time, Microsoft seemed set to support Polycom's version of SVC (that there are "versions" among vendors is a problem all by itself).

Microsoft supports SVC in Lync, but while H.264 SVC is a ratified standard, implementations aren't necessarily compatible through the entire networking stack. For the technology to take off, multiple vendors will have to embrace it and decide that compatibility between systems is a worthwhile goal. The Unified Communications Interoperability Forum is probably the best bet for that. It was founded by HP, Microsoft, Polycom, and LifeSize, and includes Vidyo, AMD, Broadcom, and others--notably absent, however, is Cisco. For a look at what interoperability will require, see this nojitter story.

Art Wittmann is director of InformationWeek Reports, a portfolio of decision-support tools and research reports. You can write to him at [email protected].


More than 100 major reports will be released this year, all free with registration. Sign up for InformationWeek Reports now.
