
Unprecedented Change Transforming Data Centers

We are in an era of transformational change in the data center industry, and that in itself is new. Historically, these dedicated buildings and rooms -- originally designed to house mainframe computers -- saw little change for more than 40 years and were highly regulated by the US and other governments.

As Winston Churchill once said, “There is nothing wrong with change, if it is in the right direction.” And change is upon us!

Over the past few years, several strong forces have built up momentum that is directly affecting data center design and architecture in 2017 and beyond. The first is cloud computing: internet giants, dissatisfied with legacy data centers' slow deployment, high cost, and limited scalability, started their own designs. The second is the Internet of Things (IoT), which is forcing data centers to handle massive amounts of data and is pushing computational power and the delivery of high-bandwidth content closer to the user -- the network edge.

The growth of hyperscale

Let’s first discuss how the internet giants approach the challenges of building massive cloud hyperscale data centers. First, market competitiveness dictates that capacity come online extremely fast: 10-20 MW projects go from concept to operational in less than a year. This is a massive change from the legacy process, which took 10x longer. The need for speed to market has led best-in-class IT players like Hewlett Packard Enterprise and now Cisco to abandon their cloud initiatives as AWS, Azure, and others expand.

So how are the winners moving so fast? They’re starting at the facility level with prefabricated power solutions, including skid-mounted switchgear and backup power, built in parallel with the actual data center facility. The skids are then shipped to the site when it’s ready. Additionally, the hyperscalers are designing their own IT equipment -- servers, storage, racks, monitoring -- and building in low-cost areas to save money and time.

This bare-metal IT equipment is also built in parallel with the facility, stacked in logical building blocks -- usually racks -- and then shipped to the site and quickly rolled into place. The IT equipment is designed to operate at the ambient temperature of the site, which lets many of these facilities essentially just blow outside air through the building as the cooling mechanism. The result is far more efficient and less complex than traditional precision cooling.
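
To get a feel for the scale involved, here is a minimal back-of-the-envelope sketch in Python of how much outside air a given IT load needs; the 10-degree (Kelvin) air temperature rise and the standard air properties are illustrative assumptions, not figures from any particular facility.

    # Rough outside-air flow needed to carry away IT heat:
    # flow = power / (air density * specific heat * temperature rise)
    # Constants are textbook approximations for air near room temperature.

    RHO_AIR = 1.2    # kg/m^3
    CP_AIR = 1.005   # kJ/(kg*K)

    def airflow_m3_per_s(it_load_kw, delta_t_k=10.0):
        """Volume of air (m^3/s) needed to absorb it_load_kw at a delta_t_k rise."""
        return it_load_kw / (RHO_AIR * CP_AIR * delta_t_k)

    for mw in (1, 10, 20):
        flow = airflow_m3_per_s(mw * 1000)
        print(f"{mw:>2} MW IT load: ~{flow:,.0f} m^3 of outside air per second")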

In order to maximize communication speed and reduce latency, the hyperscale players are doing two more things: increasing the speed of the network fabric and moving capacity closer to users. Network fabric speed refers to how fast data travels from the network -- usually the internet, perhaps over a long-haul transmission link -- to the core switch in the data center, through combinations of aggregation switches, to the server itself, and then back to the user. These connections are accomplished through combinations of different network connectivity tiers running at 10/25/50/100 gigabits per second. It was not long ago that 1G was considered amazingly fast!
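
As a rough illustration of what those fabric speeds mean per hop, the short Python sketch below computes the serialization delay for a single frame at each link rate; the 1,500-byte Ethernet-style frame size is an assumption for illustration, and real end-to-end latency adds propagation, switching, and queuing delays on top.

    # Serialization delay: the time to clock one frame onto the wire.
    # Every link a packet crosses adds at least this much latency.

    FRAME_BYTES = 1500  # assumed MTU-sized Ethernet frame

    for gbps in (1, 10, 25, 50, 100):
        delay_us = FRAME_BYTES * 8 / (gbps * 1e9) * 1e6  # microseconds
        print(f"{gbps:>3} Gbps link: {delay_us:6.2f} microseconds per frame")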

However, moving data through switches does slow it down, and latency-sensitive applications like online gaming and Microsoft 365 demand faster response times. This is forcing the internet giants to replicate their cloud computing in regional or urban areas in a hybrid-cloud architecture. Even though these huge companies can build facilities very fast, there are always regulatory and environmental issues that dramatically slow urban deployments. Fortunately, many colocation providers have already gone through that long process and have facilities with excess capacity in the urban areas the giants are moving into.
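
Switch hops aside, distance alone sets a floor on latency, which is the core argument for regional and urban capacity. The sketch below (the distances are hypothetical, and the roughly 200 km per millisecond figure for light in fiber is a standard approximation) estimates best-case round-trip times from users at various distances.

    # Best-case round-trip propagation delay through optical fiber.
    # Light in fiber covers roughly 200 km per millisecond (~2/3 of c);
    # no amount of faster switching can beat this floor.

    KM_PER_MS = 200  # approximate speed of light in fiber

    sites = [("urban edge", 50), ("regional", 500), ("distant hyperscale", 3000)]
    for name, km in sites:
        rtt_ms = 2 * km / KM_PER_MS
        print(f"{name:>18} site at {km:>5} km: ~{rtt_ms:5.1f} ms round trip")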

IoT and the edge

When most people think of IoT, they think of wearable devices. These “things” are part of IoT, but they are usually not latency-critical and do not consume much bandwidth. However, emerging IoT applications located farther out on the edge will require dramatically reduced latency: video streaming and facial recognition at stadiums, airports, and museums; smart city systems; cloud-assisted driving; remotely controlled vehicles; and eventually autonomous vehicles.

These applications, as well as virtual and augmented reality, require very localized computing and storage. The most efficient way to deploy this computational and storage capacity is through small micro data centers -- two IT racks or smaller -- that communicate over WiFi, 4G, 5G, or any combination of these. We are already seeing the start of this trend, and as applications build momentum, we will see deployments on a massive scale.

We are in a period of data center transformation as massive capacity comes online in core hyperscale cloud data centers, regional urban edge data centers, and localized edge micro data centers. Together, these deployments will enable a new era of automation and lifestyle changes for many people around the world -- change that's in the right direction.

Steven Carlini is the Sr. Director, Data Center Global Solutions for Schneider Electric. Steven is responsible for developing integrated solutions for Schneider Electric’s data center segment, including enterprise and cloud data centers. A frequent speaker at industry conferences and forums, Steven is an expert on the foundation layer of data centers, which includes power and power distribution, cooling, rack systems, physical security, and DCIM solutions that improve availability and maximize performance. Steven holds a BS in Electrical Engineering from the University of Oklahoma and an MBA in International Business from the C.T. Bauer College of Business at the University of Houston.