NFV Adds Agility To Networks

Centuries ago, Sumerians used clay tokens to keep track of goods such as seeds. Data has come a long way since. With easy access to PCs and smartphones, consumers created and consumed 1.9 trillion gigabytes of data in 2012. Digitization has accelerated data growth far beyond anything seen in the analog era -- in fact, some estimate that 90% of all the data in the world today was created in the past few years.

Data generation and consumption are no longer limited to people communicating with each other via voice, text, and video. Machines communicating with other machines, and everything connecting to the Internet, further compound the growth of data, resulting in big data. Customers' insatiable demand for anytime, anywhere access to data is fundamentally shifting the way businesses operate, propelling the growth of the cloud.

Virtualization and utilization

To meet customer demands, network efficiency and economies of scale must improve. Historically, enterprises bought physical storage, compute, and networking technology to put together a data center. Servers were dedicated to individual divisions within an organization despite low utilization rates, and as data volumes grew, optimizing resource utilization became imperative.

As such, data centers have gone through a significant shift over the last decade in the name of efficiency: by virtualizing storage and compute, they improved utilization rates from 10% to 70% while optimizing hardware investments and operational expenditures. Now we must turn our attention to the network. Static networks built to deliver content no longer work efficiently, so the network must join the virtualization journey to improve utilization and become agile.

Reinventing the wheel

Enterprises started with the private cloud, but that burdens them with staying current on the latest data center hardware and software, and they often find themselves reinventing the wheel. Further, not all enterprises fully realize the economies of scale they expected from deploying a private cloud. Essentially, enterprises struggle with a double whammy: a lack of economies of scale, and cycles spent on efforts such as learning new technologies that are not core to their primary business.

The end goal of an enterprise is to rapidly offer superior services to its customers by leveraging its core strengths. Rather than being distracted by IT, an enterprise can outsource the management of its private cloud to operators. This frees up resources to focus on what is critical to the company -- the customer.

Operators who manage multiple private data centers can cross-pollinate the best ideas among them and improve productivity overall. While this somewhat addresses the efficiency issue for enterprises, capital expenditures remain high. Therefore, many enterprises are now turning to service providers for their data center needs in the form of either hybrid or public cloud. This shifts the focus from capital expenditures to operational expenditures, and from self-management to managed services.

The need for hybrid and public clouds, hyperscale data centers, and multi-tenant environments is growing. Typical networks have discrete management interfaces and numerous operating systems, requiring staff with specialized knowledge of each vendor's products. Network utilization is usually in the 20-30% range.

A series of connected elements is only as good as the slowest link in the chain -- with storage and compute both virtualized and optimized, the network has become the choke point of the data center. And with bandwidth and application demand increasing, the network must become more flexible and agile while also becoming easier to manage. That is only economically feasible if the network is virtualized.

Network functions virtualization (NFV) to the rescue

Traditionally, networking has been about racking and stacking purpose-built hardware. But with Layer 4-7 network functions virtualized as virtual network functions (VNFs), services can be instantiated on commercial off-the-shelf (COTS) hardware, providing several benefits:

  • When demand for one function diminishes and demand for another rises, the same hardware can be repurposed to host the second function. For instance, when volumetric DDoS attacks subside, the underlying hardware can run additional WAN optimization virtual machines. It is even possible to run multiple virtual appliances on the same hardware simultaneously. This simplifies capacity planning.
  • Instead of spending up to 60 days ordering, configuring, and installing an appliance, operators can instantiate a new virtual appliance at a moment's notice to handle growing demand, as sketched in the example after this list. This flexibility and elasticity translates into business agility.
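
To make that concrete, here is a minimal sketch of instantiating a virtual appliance on demand, assuming an OpenStack-managed pool of COTS servers and the openstacksdk Python library. The cloud profile name and the image, flavor, and network IDs are hypothetical placeholders, not any particular vendor's values.

    # Minimal sketch: boot a VNF (e.g., a WAN optimization appliance)
    # on shared COTS hardware. Assumes an OpenStack cloud; all IDs and
    # names below are hypothetical placeholders.
    import openstack

    def instantiate_vnf(image_id, flavor_id, network_id, name="wan-opt-01"):
        """Boot a virtual appliance and wait until it is ready for traffic."""
        conn = openstack.connect(cloud="nfv-pod")  # hypothetical cloud profile
        server = conn.compute.create_server(
            name=name,
            image_id=image_id,      # the appliance's VM image
            flavor_id=flavor_id,    # CPU/RAM sizing for this function
            networks=[{"uuid": network_id}],
        )
        return conn.compute.wait_for_server(server)  # block until ACTIVE

The same call, pointed at a different image, repurposes the underlying hardware for a different function -- no truck roll, no 60-day procurement cycle.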

Several use cases can be addressed with NFV, ranging from data center interconnects to mobility to security. However, as the variety of virtualized applications, the number of virtual instances per tenant, and the number of tenants all grow, managing the virtualized network becomes a challenge. This wide range of applications also requires careful scrutiny and optimization to avoid the pitfalls encountered in the traditional data center. Managing the end-to-end lifecycle of virtual network functions, along with on-the-fly license generation, can consume a lot of time if not implemented properly.
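
As a rough illustration of what that lifecycle tracking involves, the sketch below models one tenant's VNF instance as a small state machine, with licensing as an explicit step. The states and transitions are simplified assumptions chosen for illustration; production orchestrators (such as those following the ETSI NFV MANO architecture) are considerably richer.

    # Hypothetical, simplified model of a per-tenant VNF lifecycle.
    from enum import Enum, auto

    class VnfState(Enum):
        INSTANTIATED = auto()
        LICENSED = auto()      # license generated on the fly
        RUNNING = auto()
        SCALING = auto()
        TERMINATED = auto()

    class VnfRecord:
        """Tracks one tenant's VNF instance through its lifecycle."""

        TRANSITIONS = {
            VnfState.INSTANTIATED: {VnfState.LICENSED, VnfState.TERMINATED},
            VnfState.LICENSED: {VnfState.RUNNING, VnfState.TERMINATED},
            VnfState.RUNNING: {VnfState.SCALING, VnfState.TERMINATED},
            VnfState.SCALING: {VnfState.RUNNING, VnfState.TERMINATED},
            VnfState.TERMINATED: set(),
        }

        def __init__(self, tenant, vnf_type):
            self.tenant, self.vnf_type = tenant, vnf_type
            self.state = VnfState.INSTANTIATED

        def transition(self, new_state):
            # Reject illegal moves, e.g., running a VNF before licensing it
            if new_state not in self.TRANSITIONS[self.state]:
                raise ValueError(f"{self.state.name} -> {new_state.name} not allowed")
            self.state = new_state

Multiply a record like this by dozens of application types, many instances per tenant, and many tenants, and the management burden the paragraph above describes becomes apparent.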

In effect, NFV and software-defined networking (SDN) are simply tools to enable better communication between people and machines. Users will continue to demand access to information anytime, anywhere, and will continue to produce endless bits and bytes that current networks cannot support. As big data evolves into hyperscale data, a flexible, scalable, and reliable high-performance infrastructure that allows new services to be added seamlessly is imperative.