Network Computing is part of the Informa Tech Division of Informa PLC


Putting Controller-Based Networks' Security Risk In Context: Page 2 of 3

OpenFlow is just a protocol that securely facilitates communication between a controller and the network infrastructure. The OpenFlow controller manipulates switch/router forwarding tables in the same way an individual switch/router manipulates its own forwarding table. Rather than having each switch/router build its own view of the network in order to make forwarding decisions, the controller creates a centralized view and then configures the switches to reflect that view. The process that builds the global forwarding table(s) is independent of the OpenFlow protocol: vendors and researchers can use whatever mechanism they choose to build the forwarding table(s) and then use OpenFlow to program each switch/router in the network.
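To make the split concrete, here is a minimal sketch of the idea in Python. It is not a real OpenFlow library--the `Controller` class, switch names and port numbers are all invented for illustration--but it shows the shape of the relationship: a global view lives in one place, and per-switch forwarding entries are pushed out from it, much as OpenFlow flow-mod messages would be.

```python
# Illustrative sketch only (hypothetical names, not a real OpenFlow API):
# a controller holds the global view and programs each switch's table from it.

class Controller:
    def __init__(self):
        # Global view: destination prefix -> list of (switch, out_port) hops
        self.routes = {}
        # Per-switch forwarding tables the controller programs
        self.switch_tables = {}

    def add_route(self, prefix, hops):
        self.routes[prefix] = hops
        # Configure every switch along the path to reflect the global view
        for switch, out_port in hops:
            self.switch_tables.setdefault(switch, {})[prefix] = out_port

ctrl = Controller()
# One route in the global view becomes one entry on each switch in the path
ctrl.add_route("10.0.1.0/24", [("s1", 2), ("s2", 1)])
print(ctrl.switch_tables["s1"])  # {'10.0.1.0/24': 2}
```

How the controller computes `routes` in the first place is exactly the part OpenFlow leaves open.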

The first objection to OpenFlow is that, since it is controller-based, the controller becomes a single point of failure. If that's the case, then don't design or purchase a controller-based system with a single point of failure. There is no reason OpenFlow controllers can't be clustered, placed in active/active or active/passive fail-over, load balanced, or arranged in any other model that ensures uptime. This is an issue that can be addressed.
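The active/passive pattern, for instance, is nothing exotic. The sketch below (hypothetical controller names, with the network heartbeat reduced to a boolean) shows the core of it: a standby promotes itself when the active stops answering health checks.

```python
# Illustrative sketch of active/passive fail-over between two controllers.
# In a real deployment the health check would be a heartbeat over the network.

class ControllerPair:
    def __init__(self):
        self.active, self.standby = "ctrl-a", "ctrl-b"

    def health_check(self, active_alive):
        # If the active controller stops responding, promote the standby
        if not active_alive:
            self.active, self.standby = self.standby, self.active
        return self.active

pair = ControllerPair()
print(pair.health_check(active_alive=True))   # ctrl-a stays active
print(pair.health_check(active_alive=False))  # ctrl-b takes over
```

Clustering and load balancing are just more elaborate versions of the same idea: more than one box can answer for the control plane.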

Given the requirement that critical systems be redundant and resistant to failure, you can bet vendors that adopt OpenFlow will address redundancy and high availability of the controllers. Besides, even if an OpenFlow switch were cut off from the controller, it would still forward traffic based on its current forwarding table. New flows are sent to the controller only when there isn't a matching forwarding entry in the network device--a.k.a. a table miss. Designing your network so that every new flow has to be sent to the controller for disposition would be a bad design.
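The table-miss behavior can be sketched in a few lines. Again, this is an illustration, not a real switch implementation--the `resolve` call and the stub controller are invented--but it shows why losing the controller only affects genuinely new flows: lookups that hit the local table never touch the controller at all.

```python
import ipaddress

# Illustrative sketch (hypothetical API): a switch forwards from its local
# table and consults the controller only on a table miss.

class StubController:
    def resolve(self, dst_ip):
        # Pretend the controller computed a route covering the destination's /24
        net = ipaddress.ip_network(dst_ip + "/24", strict=False)
        return str(net), 3  # (prefix, out_port)

class Switch:
    def __init__(self):
        self.table = {}  # prefix -> out_port, programmed by the controller

    def forward(self, dst_ip, controller=None):
        addr = ipaddress.ip_address(dst_ip)
        for prefix, port in self.table.items():
            if addr in ipaddress.ip_network(prefix):
                return port                        # hit: controller not needed
        if controller is None:                     # cut off from the controller:
            raise RuntimeError("table miss")       # only *new* flows fail
        prefix, port = controller.resolve(dst_ip)  # miss: ask the controller
        self.table[prefix] = port                  # cache the programmed entry
        return port

sw = Switch()
sw.forward("10.0.5.1", StubController())  # first packet: miss -> controller
print(sw.forward("10.0.5.99"))            # same /24, controller now unreachable: still forwards
```

If every packet took the `resolve` path, the controller would sit in the critical path of all traffic--the bad design the paragraph above warns against.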

Using aggregated flow entries, you can design your network in much the same fashion as you design it today--with subnets talking to subnets and other many-to-many, many-to-one and one-to-many designs. An aggregated flow entry says "this group of stuff talks to that group of stuff," matching on IP addresses, TCP/UDP ports or whatever your controller supports. If a new flow enters the switch and matches an aggregated flow entry, it doesn't need to go to the controller. Thus, the loss of a controller may not be that disruptive--in the short term, at least.
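A rough sketch of what such a match looks like, with made-up prefixes, port and action strings: one aggregate entry covers every host-to-host flow between two subnets on a given port, so brand-new flows within those groups resolve locally.

```python
import ipaddress

# Illustrative sketch: one aggregated entry covers "subnet talks to subnet",
# so new flows inside the groups never need a controller round trip.

AGGREGATED_FLOWS = [
    # (src prefix, dst prefix, dst TCP port, action) -- example values
    ("10.0.1.0/24", "10.0.2.0/24", 443, "forward:port2"),
]

def match(src_ip, dst_ip, dst_port):
    for src_pfx, dst_pfx, port, action in AGGREGATED_FLOWS:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_pfx)
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(dst_pfx)
                and dst_port == port):
            return action   # covered by the aggregate: handled in the switch
    return None             # genuine miss: this one goes to the controller

# A flow between two hosts that have never talked still hits the aggregate:
print(match("10.0.1.7", "10.0.2.9", 443))  # forward:port2
print(match("10.0.1.7", "10.0.2.9", 80))   # None -> controller
```

With the aggregates in place, only traffic outside the groups you've defined depends on the controller being reachable.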

The second objection is that the OpenFlow controller becomes a single point of attack. If an attacker can get control of the controller, then he or she has the keys to the kingdom. Well, there is no denying that in any controller-based network, guarding the controller is paramount. But that is true for any critical piece of management software, such as network management systems, firewall management systems, VPN systems, hypervisor management systems ... you get the picture.