Network Computing is part of the Informa Tech Division of Informa PLC


The Darwinian Accelerator Driving White-Box to Brite-Box in the Enterprise

Data Center (Image: Pixabay)

Open, standards-based networking, in which a disaggregated Linux-based network operating system (NOS) runs atop bare-metal “white-box” ODM switches from multiple vendors, is at this point conceptually nothing new. A number of Silicon Valley start-ups began selling these solutions back in early 2012. Each offering aimed to modernize and advance IP networking in one of the three main segments of the overall IP marketplace: the telco, data center, or large-enterprise vertical.
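The disaggregation described above has concrete install-time mechanics worth noting: most bare-metal white-box switches ship with ONIE (the Open Network Install Environment), a vendor-neutral boot loader that fetches and installs whichever NOS the operator chooses. A minimal sketch, assuming a hypothetical image server and installer filename:

```shell
# From the ONIE install environment on a white-box switch, point the
# installer at the chosen NOS image (URL and filename are illustrative):
onie-nos-install http://images.example.net/nos/enterprise-nos-installer.bin

# Alternatively, ONIE can discover the installer automatically over DHCP.
# A hypothetical ISC dhcpd.conf fragment using the "default-url" option:
#
#   option default-url code 114 = text;
#   subnet 10.0.0.0 netmask 255.255.255.0 {
#     option default-url "http://images.example.net/nos/enterprise-nos-installer.bin";
#   }
```

The same mechanism works regardless of which ODM built the switch, which is what lets a single NOS image target hardware from multiple vendors.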

As it turns out, not only do the software feature sets required to serve these three verticals diverge wildly, but the logical and physical architecture of each network environment has had a real impact on the timing of open networking adoption. The first vertical to fully embrace the white-box model, for example, was the large data center, aka web-scale or multi-cloud networking. In this application of the technology, thousands of white-box switches can be deployed at a very small number of locations where high-end Tier 3 resources are plentiful. Potential concerns about supply chains and global hardware service simply weren’t issues for the data center market, where both switching hardware and IT talent are physically concentrated in the same places.

NOS vendors in all three open networking verticals, telco, data center, and large enterprise alike, initially entered their target markets via classic proof-of-concept (PoC) white-box trials and deployments, undertaken to validate the economic and business value propositions and demonstrate commercial robustness. The data center vertical was the first to take off. In fact, open white-box networking was so successful there that the web-scale companies themselves, such as Amazon and Google, essentially rolled their own white-box switches for their data centers, displacing classic legacy installations from the Ciscos and Aristas of the world.

Networking takes a different approach

But something quite different is afoot with the second wave of open networking: disaggregated Linux NOS software running enterprise features on white-box switches with open APIs. The impact of scale in large, Fortune 500-class open networking deployments is far more layered, and even more mission-critical, in the enterprise than it was for data center solutions.

Here, scale does not mean configuring thousands of switches or automating the management of the network; those are ubiquitous requirements for open networking in any of the three major market segments. Rather, it means the difference in critical business considerations between deploying 5,000, 10,000, or 20,000 switches at a small handful of data center locations and deploying the same number of devices in hundreds, or even thousands, of disparate locations across the globe. For the operators of large enterprises, concerns about dependable supply chains and global service and support suddenly shoot right to the top of the requirements list.

When the open networking data center market moved from its PoC phase to commercial deployment, the choice of hardware manufacturer barely registered as a consideration. Spares could be easily stockpiled locally, and there was plenty of resident, local IT talent to install and maintain the open switching white-box hardware infrastructure. 

Conversely, when open networking PoCs and production trials for modernizing large enterprise networks finally reached the same point last year, using hardware from the very same white-box manufacturers, such as Edge-Core and Delta, that had been used in the data center PoCs, the procurement arms of most of these companies immediately took notice. They demanded switches from a known, trustworthy source of global hardware support and sparing before large-scale deployments would be funded. In essence, they began to mandate commercially branded white-box switching hardware, aka “brite-box,” for large-scale open networking deployments in their companies.

This is where the beauty of the open networking model shows itself once again. The two leading brite-box switching vendors, Dell EMC and HP, both “brand” the identical white-box hardware used in the enterprise networking PoCs and trials, selling it under their own names with backing from their extensive global service and support organizations. The Edge-Core 7618 and Dell EMC Z9264, for example, are identical 64-port 100G switches with open APIs that allow an open, standards-based Linux NOS to run on them with a full enterprise feature set.

So, the Darwinian accelerator driving the acceptance and deployment of open networking solutions in large enterprises turns out to be scale, albeit with a “mutation” that favors the brite-box trait over the white-box one.