
Data Center Lessons From The Super 7

"The Magnificent Seven" is one of my favorite old Hollywood westerns, so I wasn’t surprised to see a a remake on the horizon. The title reminds me  of the seven hyperscale companies that have been called the Super 7: Amazon, Facebook, Google, Microsoft, Baidu, Alibaba, and Tencent. What is it that makes these seven companies so super? And most importantly, what can other companies learn from them to make their own data centers work better?

Make it open

The Super 7 are really BIG. In 2008, just three companies -- Dell, HP, and IBM -- accounted for 75% of the servers in the world. Fast forward just four years and eight companies made up 75% of the server market, including Google, the first of the seven to build its own servers. Today, all of the hyperscale companies build their own servers or have customized servers built for them by OEM/ODM partners. Several of them are even designing and building their own “white-box” Ethernet switches and integrating them with their own specialized network operating systems.

While not every company needs to build its own servers, or even have customized servers built for it, data center hardware can be a key competitive differentiator for any business. Traditional networking and storage platforms are black boxes, with software and hardware delivered as a vendor-defined solution. These closed boxes not only cost more; they also stifle innovation, create vendor lock-in, and limit customization. By contrast, the white-box switches and server platforms originally developed for the Super 7 are now offered by vendors to any business, making customization and optimization easier than ever.

Software-defined everything

The Super 7 have embraced a software-defined everything (SDX) architecture. That means that instead of buying purpose-built compute and storage appliances, they run all their workloads on industry-standard servers and use software to create tightly coupled compute clusters and fault-tolerant storage systems. This “build-it-yourself” mentality has allowed these giants to streamline their infrastructure by eliminating costly Fibre Channel storage area networks and running everything on a single, converged network environment.
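To make the contrast with a Fibre Channel SAN concrete, here's a minimal Python sketch of the core idea behind software-defined storage: each object is written to several commodity servers over the ordinary Ethernet network, and the write succeeds once a quorum of replicas acknowledges it. The node addresses and the one-line ACK protocol are hypothetical, invented purely for illustration; production systems layer far more machinery on the same principle.

```python
import socket

# Hypothetical storage nodes reachable over the converged Ethernet network.
STORAGE_NODES = [("10.0.1.11", 7000), ("10.0.1.12", 7000), ("10.0.1.13", 7000)]

def replicated_write(key: bytes, value: bytes, quorum: int = 2) -> bool:
    """Send the object to every node; succeed once `quorum` replicas ack."""
    acks = 0
    for host, port in STORAGE_NODES:
        try:
            with socket.create_connection((host, port), timeout=2) as conn:
                conn.sendall(key + b"\n" + value)
                if conn.recv(3) == b"ACK":  # assumed node-side protocol
                    acks += 1
        except OSError:
            continue  # a down node is tolerated, not fatal
    return acks >= quorum
```

Fault tolerance here lives entirely in software: lose a node and the write still lands on a majority of replicas, with no dual-controller SAN hardware involved.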

Software-defined storage and networking are no longer emerging technologies. They're being deployed not only by the big players -- the innovators -- but by enterprise early adopters as well. It won’t be much longer before the majority comes to realize the control, flexibility, and savings a software-defined architecture can provide. Software-defined architecture makes data center differentiation a real option for even the most humble of organizations.

Agility is a virtue

Being nimble is critical, and the Super 7 are certainly nimble – they adopt the latest technologies and use automation to cope with the massive scale of their data centers. To ensure the highest levels of efficiency, they upgrade their servers every three years versus the four- to five-year upgrade cycle more typical of enterprise environments. They are able to customize these servers and storage platforms to their exact needs and, because they use SDX architectures, can eliminate costly management and redundancy elements. Instead, they are able to use software to achieve high availability at a rack or even data center level.
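The rack-level availability point comes down to simple arithmetic: if any single replica can serve a workload, each additional replica in an independent failure domain multiplies away the downtime. The sketch below assumes, purely for illustration, that a commodity node is up 99% of the time and that failures are independent.

```python
def combined_availability(per_node: float, replicas: int) -> float:
    """Probability that at least one of `replicas` independent nodes is up."""
    return 1 - (1 - per_node) ** replicas

# 99% per-node availability is an assumed figure for illustration.
for n in (1, 2, 3):
    print(f"{n} replica(s): {combined_availability(0.99, n):.4%} available")
```

In this simple model, three modest servers in separate racks reach 99.9999%, which is why software-level replication lets the Super 7 drop redundant hardware from each individual box.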

It may not be financially feasible for smaller organizations to emulate the speed and frequency with which the Super 7 adopt new technologies or upgrade their servers, but organizations of all sizes can still keep a close eye on which technologies these giants are deploying and which are proving most successful. Data center operators can let the hyperscale companies test the waters, then decide which technologies make the most sense for their own organizations.

Moreover, organizations that deploy tried-and-true software-defined solutions now will likely find that later upgrades are less costly and easier to roll out. Many organizations get hung up on the initial sticker price, but it is really the total cost of ownership that should drive IT decisions. Sometimes that means purchasing technology that pushes the boundaries a bit now but is better equipped to address future needs.

The long play

Finally, the tech giants use the most advanced networking equipment because they’ve realized it's the only way to get the most out of their servers and storage. They've migrated to 25, 40, 50, and even 100 Gigabit Ethernet so they can run the maximum number of workloads on their compute clusters. For the cloud vendors, this profoundly affects the bottom line because, after all, they sell virtual machines and application workloads. The more efficiently they use their compute and storage infrastructure, the more virtual machines and workloads they can host, and the more they have to sell to customers.
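The economics are easy to sketch: dividing uplink bandwidth by an assumed per-VM network demand shows how NIC speed turns into sellable capacity. The 500 Mbps per-VM figure below is an illustrative assumption, and the network is of course only one of several possible bottlenecks.

```python
# Assumed steady-state network demand per VM; real profiles vary widely.
PER_VM_MBPS = 500

for nic_gbps in (10, 25, 50, 100):
    vms = nic_gbps * 1000 // PER_VM_MBPS  # Mbps of uplink / Mbps per VM
    print(f"{nic_gbps:>3} GbE uplink -> up to {vms} network-bound VMs per server")
```

Under these assumptions, moving a server from 10 GbE to 25 GbE more than doubles the number of network-bound workloads it can host without touching its CPU or storage.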

In adopting 25, 50, and 100 Gigabit Ethernet, the Super 7 set the pace for forward-thinking data center architects. These innovators understand that merely upgrading to the next available version or speed isn’t always the best long-run strategy, because all too often that technology becomes outdated before the deployment is even complete. Organizations of all sizes must anticipate future needs every time an upgrade is under consideration. Every technology purchase must balance immediate cost, performance, total cost of ownership, and, most important, future-proofing.

A blueprint

So like the brave villagers in the movie, who were the real winners, the rest of the industry will reap the benefits of the pioneering efforts of these tech giants. The availability of open platforms has spawned a whole new ecosystem of open networking technologies such as Cumulus Linux, OpenSwitch, and Microsoft's SONiC. Furthermore, the rapid adoption of 25, 50, and 100 GbE by these companies is solving the traditional price-and-volume chicken-and-egg problem of new technology adoption.

Best of all, they’ve created a blueprint for others to follow and clearly demonstrated that the path to total infrastructure efficiency requires state-of-the-art servers and storage combined with high-performance Ethernet networking.