Network functions virtualization (NFV) is a concept that emerged just a year and a half after the Open Networking Foundation (ONF) started talking about software-defined networking (SDN). The initial proponents of NFV were major service providers looking for network service components that would allow them to build more agile networks at lower cost. They formed an Industry Specification Group within the European Telecommunications Standards Institute (ETSI), which eventually produced a number of documents, including the rather complex NFV Architectural Framework.
Looking at the context in which NFV started, the standardization organization where the work takes place, and the complexity of the NFV standards documents, one is tempted to conclude that NFV applies primarily to large service provider environments and has no place in a typical enterprise environment.
Nothing could be further from the truth -- we just need to look at NFV creatively, take its best principles, and forgo the complexity of large-scale fully automated environments.
Which NFV principles are usable in an enterprise environment?
Let’s start with the Wikipedia definition of NFV:
Network Functions Virtualization (NFV) is a network architecture concept that proposes using IT virtualization related technologies to virtualize entire classes of network node functions into building blocks that may be connected, or chained, to create communication services.
In layman's terms, NFV is all about deploying network functionality (routing, firewalling, load balancing, WAN optimization, etc.) in a virtualized environment (virtual machines or Linux containers). Does that apply to enterprise environments? Of course!
A typical enterprise data center has a number of firewalling and load balancing devices, usually implemented as physical appliances with fixed performance. Most enterprises buy these appliances as part of a major project or a new data center deployment -- in both cases buying appliances way beyond their current needs to ensure future traffic growth won't overload them.
When a physical appliance fails, you cannot simply restart it on different hardware; you have to order a replacement part, which arrives in a few hours or days. In the meantime, your data center has to remain available, which forces you to buy two appliances of the same kind.
In short, we commonly end up buying a pair of 10 Gbps firewalls or load balancers to support 1-2 Gbps of traffic… just because we might see increased traffic in the future, and because one of the devices might fail.
Enter the brave new world of network functions virtualization. Most network appliance vendors already offer their solutions in virtual machine format. You can deploy those virtual machines on any hardware (although I would recommend using high-quality servers for mission-critical infrastructure services), move them around at will, and automatically restart them when they fail or the underlying hardware fails.
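The automatic-restart behavior can be sketched as a simple health-check-and-respawn loop. Everything in this snippet (the Appliance and Host classes, the ensure_running function, the host names) is hypothetical and only illustrates the concept -- it is not any vendor's or hypervisor's actual API:

```python
class Host:
    """A hypervisor host that may fail."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

class Appliance:
    """A virtual network function (e.g., firewall or load balancer)."""
    def __init__(self, name, host):
        self.name = name
        self.host = host
        self.running = True

    def check_health(self):
        # In a real platform this would be a heartbeat or API probe.
        return self.running and self.host.healthy

def ensure_running(appliance, spare_hosts):
    """Restart a failed appliance on the first healthy spare host."""
    if appliance.check_health():
        return appliance.host.name
    for host in spare_hosts:
        if host.healthy:
            appliance.host = host
            appliance.running = True
            return host.name
    raise RuntimeError("no healthy host available")

# Usage: the appliance's host dies; the loop moves it to a spare host.
fw = Appliance("edge-firewall", Host("esx-01"))
fw.host.healthy = False
new_host = ensure_running(fw, [Host("esx-02")])
```

This is exactly what high-availability features of modern hypervisor platforms do for you -- you get the standby-appliance behavior without buying a second physical box.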
Even better, you can easily move a virtual appliance to a different data center together with the associated application workload, making the disaster recovery process way simpler than it was in the days of physical appliances.
Does it work?
Short answer: yes. Will a firewall or load balancer implemented as a virtual machine give you the same performance as a physical appliance? It depends. To learn more, drop by my presentation What NFV Means to the Enterprise on April 30 at Interop Las Vegas, and if you want to know more about other enterprise benefits of NFV, join the half-day workshop Simplifying Application Workload Migration Between Data Centers on Tuesday, April 28.
Finally, there are other interesting uses for NFV in enterprise environments -- you can use it to simplify remote site setups. Interested? You'll learn more on April 30.