This extension of SDN is called "transport SDN." The concept: finally give higher-layer network controllers the ability to bring up, reroute and tear down optical links -- the physical plumbing of a wide area network. This isn't possible in the current paradigm, where transport links are typically "nailed up," meaning once they're brought online, they stay that way until the end of time.
IT also overprovisions like crazy. That's because one of the strengths of the OSI networking model is also one of its greatest weaknesses: Everything is separated into layers. Data storms occurring at higher application layers will be invisible to the transport layer at the very bottom of the protocol stack. Because of this, the transport layer must be provisioned for peak traffic, even if that peak occurs less than 1% of the time. Yes, various techniques exist to optimize packet flows between data centers and higher-layer applications running between them, but all of these assume sufficient transport bandwidth already exists, and none of them can move physical links around to follow demand. Making the plumbing layer of the network more dynamic and flexible allows full-layer optimization of data flows between applications, something that simply doesn't exist today.
While the transport SDN concept may be simple, implementation is more difficult. Transport networks are much too complicated and proprietary for a centralized controller to manage directly. Even if the controller could handle the level of detail required, the inherent latency of long-haul transport links means the time needed for messages to travel from the controller to the transport nodes would be excessive. So instead, the transport layer is abstracted and presented to a centralized controller as a sort of Reader's Digest version -- a main tenet of transport SDN. Furthermore, the centralized controller exists only as a logical element and is, in reality, both virtualized and distributed across many controllers. This represents quite a leap, and like most technological leaps, it demands business justification. So let's look at four potential use cases of transport SDN.
1. Bandwidth calendaring: Today's bandwidth consumption follows identifiable cycles: Daytime, evening, and overnight traffic patterns are substantially different, much like commuter traffic. Cities have learned to flip the direction of some highway lanes between morning and evening rush hours. Similarly, the transport network can be reconfigured to optimize its topology throughout the day. For example, if data backups occur between midnight and 4 a.m., optical transport links can be rearranged to maximize capacity between the primary and backup data centers.
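The calendaring idea boils down to a time-of-day lookup: each daily window maps to the link groups that deserve extra optical capacity during it. A minimal sketch, assuming a hypothetical schedule (the link names and windows below are illustrative, not a real controller API):

```python
from datetime import time

# Hypothetical calendar: each entry maps a daily window to the optical
# link group that should carry extra capacity during it.
SCHEDULE = [
    (time(0, 0), time(4, 0), "primary-dc<->backup-dc"),   # nightly backups
    (time(8, 0), time(18, 0), "hq<->branch-offices"),     # business hours
]

def links_for(now):
    """Return the link groups that should be boosted at time-of-day `now`."""
    return [link for start, end, link in SCHEDULE if start <= now < end]

# At 2 a.m., the backup path gets the capacity.
print(links_for(time(2, 0)))  # ['primary-dc<->backup-dc']
```

In a real deployment, the lookup's result would drive requests to the SDN controller, which in turn reprovisions the underlying optical circuits.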
2. Follow the sun: This is the pattern in which peak business hours shift with the time zones. Early morning in North America sees lots of trans-Atlantic traffic between Europe and the East Coast. As the business day begins in North America, traffic increases across the continent from east to west, with trans-Atlantic traffic waning and trans-Pacific traffic eventually increasing. During North American evenings, when most traffic is video streaming to home televisions, Asia is just starting its business day, and trans-Pacific traffic spikes. Every 24 hours, this pattern repeats. With static optical circuits, every circuit has to be provisioned for peak traffic. With transport SDN, optical circuits can be rerouted dynamically.
3. IP off-load: Traffic volume is not always so predictable, however. While calendaring bandwidth is helpful, it doesn't help with unexpected spikes. In this third use case, an IP off-load manager is needed. An IP off-load manager monitors the amount of IP traffic flowing through a router port. When a preset threshold is crossed, another end-to-end link is brought up, including provisioning of another optical transport link to off-load the first link. When traffic drops below another set threshold, the additional link is deprovisioned, and all traffic is directed back to the original link. While routers do have a basic ability to respond to traffic spikes, without the ability to command the physical layer to provision additional transport links, the bottleneck just moves down a layer. With a centralized SDN controller directing an abstracted optical transport layer, however, additional optical circuits can be provisioned, along with additional switching capacity, to handle traffic spikes.
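The off-load manager described above is essentially a two-threshold (hysteresis) state machine: provision an extra link when utilization crosses a high-water mark, and release it only after utilization falls below a lower mark, so the link doesn't flap. A minimal sketch; the threshold values and action names are assumptions, not a real controller API:

```python
class OffloadManager:
    """Sketch of an IP off-load manager with hysteresis.

    Brings up a second optical link above a high-water mark and tears it
    down below a low-water mark. In practice the returned actions would be
    requests to the SDN controller; here they are just labels.
    """

    def __init__(self, high=0.8, low=0.3):
        self.high = high            # utilization fraction that triggers off-load
        self.low = low              # fraction below which the extra link is released
        self.offload_active = False

    def observe(self, utilization):
        """Feed in measured port utilization (0.0-1.0); return the action taken."""
        if not self.offload_active and utilization >= self.high:
            self.offload_active = True
            return "provision-offload-link"
        if self.offload_active and utilization <= self.low:
            self.offload_active = False
            return "deprovision-offload-link"
        return "no-op"

mgr = OffloadManager()
print(mgr.observe(0.9))   # provision-offload-link
print(mgr.observe(0.5))   # no-op (still above the low-water mark)
print(mgr.observe(0.2))   # deprovision-offload-link
```

The gap between the two thresholds is the design choice that matters: a single threshold would thrash the optical layer every time traffic hovered near it.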
4. Inter-data center traffic: Traffic movement among data centers is on the increase. Say you have three widely geographically distributed data centers in which all storage and compute resources have been virtualized, and the virtualization of switching is well underway. When one data center sees a spike in activity, virtual resources are shifted around in big blocks among clusters within that data center to better balance the load. However, this load balancing is easy only within the data center walls. What happens if an entire facility is overloaded, or even brought down by a disaster? In these cases, load balancing among geographically dispersed data centers can save the day. Transport SDN allows the physical transport links between data centers to be abstracted and virtualized so that data center-to-data center load balancing can occur. This type of function can be done only by a centralized controller with full visibility of all layers of the network. For example, say the CPU loads in a cluster of virtual machines begin to exceed 80%; a new end-to-end flow path can be provisioned to move the load to a different cluster in a new data center hundreds of miles away. Today, such a move might take weeks or months to accomplish. With transport SDN, you simply push a button and the move is nearly instantaneous.
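The 80% trigger above implies a simple policy a global controller might run: when any data center's average CPU load crosses the threshold, pick a destination for the migration. A toy sketch, assuming a hypothetical hottest-to-coolest policy (the function and data center names are illustrative):

```python
def pick_migration(dc_loads, threshold=0.8):
    """Given average CPU loads per data center (0.0-1.0), return a
    (source, destination) pair when some DC crosses the threshold,
    else None. Toy policy: shift load from the hottest DC to the coolest."""
    hottest = max(dc_loads, key=dc_loads.get)
    if dc_loads[hottest] < threshold:
        return None                      # nothing is overloaded; do nothing
    coolest = min(dc_loads, key=dc_loads.get)
    return (hottest, coolest)

# East coast DC is over the 80% mark, so load heads west.
print(pick_migration({"east": 0.85, "west": 0.40, "eu": 0.55}))  # ('east', 'west')
```

Only after a decision like this would the transport SDN piece come in: the controller would provision the end-to-end optical path between the chosen pair before the virtual machines move.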
Currently, transport SDN efforts are in the test bed phase of development. Early open source software and hardware are being run together not only to check for interoperability but also to quantify the potential cost savings of these and other use cases. Testing thus far has clearly demonstrated that flexible plumbing can help keep networks from clogging.
Everything else in our networks has been virtualized. It only makes sense that the time has come for wide area links to join the cloud.