One of the most interesting aspects of Interop is the Interop Labs area on the show floor, which highlights new and emerging technologies. In years past these technologies have included VPNs, network access control systems and various Trusted Computing Group projects. This year, InteropNet's OpenFlow Lab is going to showcase OpenFlow, with live demonstrations of the OpenFlow protocol. OpenFlow may well become an important networking protocol with a significant impact on how you deploy and manage networks.
"OpenFlow is now mature enough to be investigated or adopted by enterprises. As switch vendors start to ship product, enterprises will have a chance to experiment with OpenFlow on their own, which provides an important feedback loop to the OpenFlow Consortium to help evolve and improve the specification," says Jed Daniels, director of product development at OPNET Technologies and team lead for the InteropNet OpenFlow Labs. While InteropNet doesn't have the names of the vendor participants yet, the demonstration will be running on equipment provided by vendors that support the OpenFlow specification. The labs will demonstrate loop-free networking, dynamic load balancing across multiple links and quality of service for VoIP.
OpenFlow is a network protocol, being developed at Stanford University, that manages the flow tables on network switches independently of the existing switch software. A flow describes a unidirectional set of frames or packets that share similar traits, such as source and destination MAC addresses, IP addresses and TCP/UDP ports, or anything else that can uniquely differentiate one set of related frames or packets from another. The flow table in switches and routers is what commonly determines where a frame or packet is sent next.
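To make the idea concrete, here is a minimal sketch of a flow table lookup in Python. This is purely illustrative, not the actual OpenFlow wire format or table structure: a flow entry pairs a set of match fields with an action, and the switch forwards each packet according to the first entry whose fields all match.

```python
def matches(entry, packet):
    """True if every field in the entry's match set equals the packet's value."""
    return all(packet.get(field) == value for field, value in entry["match"].items())

def lookup(flow_table, packet):
    """Return the action of the first matching flow entry, or None on a table miss."""
    for entry in flow_table:
        if matches(entry, packet):
            return entry["action"]
    return None  # in OpenFlow, a table miss is typically forwarded to the controller

# Hypothetical table: send HTTP traffic for one host out port 2, everything else out port 1.
flow_table = [
    {"match": {"ip_dst": "10.0.0.5", "tcp_dst": 80}, "action": ("output", 2)},
    {"match": {}, "action": ("output", 1)},  # empty match acts as a wildcard entry
]

print(lookup(flow_table, {"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # ('output', 2)
print(lookup(flow_table, {"ip_dst": "10.0.0.9", "tcp_dst": 22}))  # ('output', 1)
```

The key point is that the match fields can be any combination of header values, which is what lets one protocol express switching, routing and policy decisions in a single table.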
Think of OpenFlow as a controller-based network where the forwarding and routing decisions are made centrally, similar to how controller-based wireless access points from vendors such as Aruba, Cisco and Motorola work. Greg Ferro has a nice writeup on Controller Based Networks for Data Centres that is worth a read.
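The controller-based model can be sketched in a few lines of Python. This is a toy illustration with hypothetical names, not a real OpenFlow controller API: one central controller holds the whole topology, computes a path, and derives the per-switch forwarding entries it would push out, rather than each switch discovering the network and deciding on its own.

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search over an adjacency dict of switch-to-switch links."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbor in links.get(node, []):
            if neighbor not in prev:
                prev[neighbor] = node
                queue.append(neighbor)
    return None

def compute_flow_entries(links, src, dst):
    """Return {switch: next_hop} entries the controller would push to each switch."""
    path = shortest_path(links, src, dst)
    return {path[i]: path[i + 1] for i in range(len(path) - 1)}

# Hypothetical three-switch topology: s1 -- s2 -- s3
links = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2"]}
print(compute_flow_entries(links, "s1", "s3"))  # {'s1': 's2', 's2': 's3'}
```

Because the controller sees the whole graph, a topology change means recomputing paths in one place and pushing updated entries, instead of waiting for every device to independently reconverge.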
Contrast the controller-based model with how network equipment is deployed today--every device discovers its own neighborhood, or part of the world, and individually makes forwarding decisions based on its own knowledge. Part of the complexity in routing and forwarding protocols and algorithms is ensuring that, independently, each participating device will build a coherent forwarding model. When the topology changes, it takes time for the devices to converge on the new topology. Applying policies to traffic, such as quality of service, access control or load balancing, is more difficult than it should be.

Mike Fratto is a principal analyst at Current Analysis, covering the Enterprise Networking and Data Center Technology markets. Prior to that, Mike was with UBM Tech for 15 years, and served as editor of Network Computing. He was also lead analyst for InformationWeek Analytics.