Building A Data-Centric Data Center

The final installment of this series examines the key elements required to accomplish rack-level optimization for a new data center architecture.

Kevin Deierling

July 29, 2015


In the previous two blog posts in this series, I talked about system administrators losing sight of the data in data center architectures and focusing only on compute horsepower. With the advent of cloud computing and big data analytics, that has changed, and the focus has returned to the data itself.

In this data-centric data center architecture, the compute element is no longer the center of the server; it becomes a modular peripheral just like any other device. Furthermore, in this design the server itself is no longer the unit of optimization -- the rack, and even the entire data center, becomes the focus of optimization.

This new data center architecture puts tremendous pressure on the characteristics and capabilities of the network. In fact, the networking element becomes critical, requiring:

1. Disaggregation of server resources

  • Flexible multi-host shared NIC

  • Allows sharing of CPU, storage, memory, and I/O resources

  • Flexibility to scale each element independently

  • Reduced cost, power, and area

2. Efficient data movement

  • Network virtualization, flow steering, and acceleration of overlay networks

  • RDMA for server and storage data transport (see the sketch after this list)

3. High-speed data connectivity

  • Cost-effective server and storage physical connectivity

  • Copper connectivity within the rack

  • Standards-based silicon photonics enabling long-reach data connectivity  
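
To make the RDMA requirement above more concrete, here is a minimal sketch of the local-side setup using the open source libibverbs API in C. It only opens a device, registers a buffer, and creates a reliable-connected queue pair; the connection exchange with the peer and the actual RDMA operations are omitted, and the buffer size and queue depths are arbitrary values chosen for illustration, not anything prescribed by a particular platform.

    /* Minimal libibverbs setup sketch: the local-side plumbing behind RDMA
     * transport. Connection setup and the RDMA operations themselves are
     * omitted. Build on a host with rdma-core: gcc rdma_sketch.c -libverbs */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
        if (!dev_list || num_devices == 0) {
            fprintf(stderr, "no RDMA-capable devices found\n");
            return 1;
        }

        /* Open the first device; a real application would pick one by name. */
        struct ibv_context *ctx = ibv_open_device(dev_list[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a buffer so the NIC can read and write it directly,
         * bypassing the CPU on the data path. 4 KB is an arbitrary size. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* Completion queue and a reliable-connected queue pair. */
        struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
        struct ibv_qp_init_attr qp_attr = {
            .send_cq = cq,
            .recv_cq = cq,
            .cap     = { .max_send_wr = 16, .max_recv_wr = 16,
                         .max_send_sge = 1, .max_recv_sge = 1 },
            .qp_type = IBV_QPT_RC,
        };
        struct ibv_qp *qp = ibv_create_qp(pd, &qp_attr);

        printf("QP %u ready; rkey 0x%x would be exchanged with the peer\n",
               qp->qp_num, mr->rkey);

        /* Teardown in reverse order. */
        ibv_destroy_qp(qp);
        ibv_destroy_cq(cq);
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(dev_list);
        return 0;
    }

The key step is the memory registration: once a buffer is registered, the remote NIC can read or write it directly, which is what takes the CPU out of the server-to-server and storage data path.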


Performing data-centric, rack-level optimization results in an architecture that provides four major benefits:

  • Flexibility: Mix and match computational GPU and CPU types as needed

  • Upgradability: Ability to upgrade to newer compute generations without a forklift server upgrade

  • Network efficiency: Sophisticated RDMA transport, and a physical network that becomes a shared resource

  • Capex and opex reductions: Fewer NICs, connectors, and cables, along with lower power, reduce both upfront and operating costs

Facebook Yosemite

A good example of a platform that illustrates these concepts is the multi-host Yosemite platform introduced by Facebook in March at the Open Compute Project (OCP) Summit in San Jose. This multi-host architecture allows a single network resource to support multiple processing resources with complete software transparency. Yosemite connects four CPUs to a shared multi-host NIC capable of driving 100 Gbps Ethernet links over standard QSFP connectors and cables.


Facebook's Yosemite platform includes all the core requirements of a data-centric data center architecture, including a network able to provide data efficiently to the individual processing units. This new rack-level architecture with modular computing elements provides the flexibility to mix and match processor types to align with application requirements.

New platform architectures such as Yosemite enable the evolution toward this data-centric data center, but they are not enough by themselves. New ways of processing data within nodes and moving it efficiently between nodes are also required, because the onslaught of data is changing not only rack architecture and networking technology, but also how data is processed.

Instead of a one-size-fits-all processor, the variation in the types of data processing performed means that different processors become optimal depending on the task. For advanced machine learning and image recognition tasks, a massively parallel GPU (graphics processing unit) might be ideal. Other, more conventional workloads might call for a high-powered x86 or OpenPOWER CPU, while power- and cost-efficient ARM-based processors may fit the bill for web and application processing tasks.

Finally, very high-speed data connectivity is required, based on industry-standard form factors that can leverage the existing installed fiber plant and span large distances to cost-effectively connect all of the elements within a disaggregated data center. The OpenOptics silicon photonic wavelength division multiplexing (WDM) specification, contributed to the OCP in March, provides the answer: an open, multi-vendor, cost-effective means of connecting rack-level components. The OpenOptics specification leverages advanced silicon photonics technology capable of scaling to terabits per second of throughput carried on a single fiber and spanning distances of up to 2 km.
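
As a rough back-of-the-envelope check on how WDM reaches terabit-class throughput on one fiber, the sketch below simply multiplies a channel count by a per-wavelength rate. The 64 wavelengths and 25 Gbps per wavelength are illustrative assumptions, not figures taken from the OpenOptics specification.

    /* Back-of-the-envelope WDM capacity estimate. The channel count and
     * per-wavelength rate below are illustrative assumptions only. */
    #include <stdio.h>

    int main(void)
    {
        int wavelengths = 64;          /* assumed WDM channels on one fiber */
        double gbps_per_lambda = 25.0; /* assumed rate per wavelength, Gbps */

        double total_gbps = wavelengths * gbps_per_lambda;
        printf("%d wavelengths x %.0f Gbps = %.1f Tbps on a single fiber\n",
               wavelengths, gbps_per_lambda, total_gbps / 1000.0);
        return 0;
    }

Under these assumed numbers, a single fiber carries 1.6 Tbps; adding wavelengths or raising the per-lane rate scales the total without pulling new fiber.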

Adapting to change

The ever-increasing number of users, applications, and devices requires a new data-centric architecture capable of moving and processing vast amounts of data quickly and efficiently. Businesses that don't understand the changes required and continue with a compute-centric approach will find themselves spending more and more on compute hardware, yet never keeping up with the flood of data.

On the other hand, businesses that understand and embrace the efficient networking technologies this new data-centric data center architecture requires will find themselves with a competitive advantage -- able to process and analyze more data in real time and convert it into actionable business intelligence. These businesses will thrive at the expense of those that are unable or unwilling to change.

About the Author

Kevin Deierling

Kevin Deierling is Senior Vice President of NVIDIA Networking. He joined NVIDIA when it acquired Mellanox, where he served as vice president of marketing responsible for enterprise, cloud, Web 2.0, storage, and big data solutions. Previously, Deierling served in various technical and business management roles, including chief architect at Silver Spring Networks and vice president of marketing and business development at Spans Logic. Deierling has contributed to multiple technology standards through organizations including the InfiniBand Trade Association and the PCI Industrial Manufacturing Group. He holds more than 25 patents in the areas of security, wireless communications, error correction, video compression, and DNA sequencing, and was a contributing author of a text on BiCMOS design. Deierling holds a BA in solid state physics from UC Berkeley.
