How DPUs, IPUs, and CXL Can Improve Data Center Power Efficiency

DPU, IPU, and CXL technologies that offload switching and networking tasks from server CPUs have the potential to significantly improve data center power efficiency.

Data Processing Units (DPUs), Infrastructure Processing Units (IPUs), and Compute Express Link (CXL) technologies, which offload switching and networking tasks from server CPUs, have the potential to significantly improve data center power efficiency. In fact, the National Renewable Energy Laboratory (NREL) believes enterprises that use such techniques and focus on power reduction can realize about a 33 percent power efficiency improvement.

Improving power efficiency with new networking technologies

In the last year, several technologies have emerged that take a fresh look at data center power consumption. While they take different approaches, these solutions all seek to improve energy efficiency by offloading switching and networking chores from CPUs, much as GPUs and hardware-based encryption reduce the load on CPUs (and thus drive down overall power usage). Here are some of the major developments to watch:

What are Data Processing Units (DPUs)?

The Data Processing Unit (DPU) is a relatively new technology that offloads processing-intensive tasks from the CPU onto a separate card in the server. Essentially, a DPU is a mini onboard server that is highly optimized for network, storage, and management tasks. (A server's general-purpose CPU was never designed for these intensive data center workloads, which can often bog it down.)

What impact can DPUs have? “The use of hardware acceleration in a DPU to offload processing-intensive tasks can greatly reduce power use, resulting in more efficient or, in some cases, fewer servers, a more efficient data center, and significant cost savings from reduced electricity consumption and reduced cooling loads,” said Zeus Kerravala, founder and principal analyst with ZK Research.

How much power can DPUs save? A report by NVIDIA, which offers the NVIDIA BlueField DPU, estimates that offloading network, storage, and management tasks can reduce server power consumption by up to 30 percent. Furthermore, the report noted that the power savings increase as server load increases and can reach $5.0 million in electricity costs for a large data center with 10,000 servers over the servers' three-year lifespan. There would be additional savings in cooling, power delivery, rack space, and server capital costs.
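To put the report's numbers in perspective, the arithmetic behind them is easy to sketch. In the back-of-the-envelope check below, the server count, lifespan, and 30 percent offload figure come from the report; the average per-server wattage and electricity price are illustrative assumptions:

```python
# Back-of-the-envelope check on the DPU savings figures cited above.
SERVERS = 10_000          # large data center, per the NVIDIA report
LIFESPAN_YEARS = 3        # server lifespan, per the report
OFFLOAD_SAVINGS = 0.30    # up to 30% server power reduction, per the report

AVG_SERVER_WATTS = 600    # assumed average draw of a busy server
PRICE_PER_KWH = 0.10      # assumed electricity price in USD

HOURS_PER_YEAR = 8_760
kwh_per_server_year = AVG_SERVER_WATTS / 1_000 * HOURS_PER_YEAR
baseline_cost = SERVERS * LIFESPAN_YEARS * kwh_per_server_year * PRICE_PER_KWH

print(f"Baseline electricity cost: ${baseline_cost:,.0f}")
print(f"Savings with DPU offload:  ${baseline_cost * OFFLOAD_SAVINGS:,.0f}")
# With these assumptions, savings land near $4.7M, in the same ballpark
# as the report's $5.0 million headline figure.
```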

What are Infrastructure Processing Units (IPUs)?

Infrastructure services such as virtual switching, security, and storage can consume a significant number of CPU cycles. Infrastructure Processing Units (IPUs) accelerate these tasks, freeing up CPU cores for improved application performance and reduced power consumption.

Last year, Intel and Google Cloud launched a co-designed chip, code-named Mount Evans, to make data centers more secure and efficient. The chip takes over the work of packaging data for networking from CPUs. It also offers better security between different apps and users that may be sharing CPUs.

According to Intel's definition, an IPU is an advanced networking device with hardened accelerators and Ethernet connectivity that accelerates and manages infrastructure functions using tightly coupled, dedicated, programmable cores. An IPU is particularly effective in modern compute environments that rely on software-defined networking (SDN) and increasingly sophisticated management software, which together drain compute resources. Intel estimates that networking alone can consume 30 percent of a host CPU's cycles in some highly virtualized environments.
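A rough calculation shows why reclaiming that 30 percent matters. In the sketch below, Intel's 30 percent estimate is taken at face value, and the fleet size is an illustrative assumption; the point is that offloading infrastructure work translates into meaningfully more application capacity per host, or fewer hosts overall:

```python
# Effect of offloading infrastructure tasks from host CPUs to an IPU.
INFRA_FRACTION = 0.30   # Intel's estimate for highly virtualized environments
FLEET_SIZE = 1_000      # illustrative fleet size (assumption)

app_fraction = 1 - INFRA_FRACTION         # CPU share left for apps: 0.70
capacity_gain = 1 / app_fraction          # per-host gain once offloaded: ~1.43x
hosts_needed = FLEET_SIZE * app_fraction  # same workload on fewer hosts

print(f"Per-host application capacity gain: {capacity_gain:.2f}x")
print(f"Hosts needed for the same workload: {hosts_needed:.0f} of {FLEET_SIZE}")
```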

What is Compute Express Link (CXL)?

Compute Express Link (CXL) is an open interconnect standard that enables efficient, coherent memory access between a host, such as a processor, and a device, such as a hardware accelerator or Smart NIC. The standard aims to tackle what is known as the von Neumann bottleneck, in which compute speed is limited by the rate at which the CPU can retrieve instructions and data from memory.

CXL solves this problem in several ways. It takes a new approach to memory access and sharing between multiple computing nodes. It allows memory and accelerators to become disaggregated, enabling data centers to be fully software-defined.

Memory devices in a CXL pool can be shared across many hosts, which opens new possibilities for applications to read, modify, and write data in place without moving data or passing messages between nodes over the network. That ability for data to be mapped and used by multiple hosts can improve infrastructure resource utilization.

How important is CXL? "CXL technology could significantly influence future server architectures," said Aaron Lewis, an analyst in Omdia's Cloud and Data Center Research Practice. Specifically, CXL can reduce the memory cost in servers while meeting capacity and bandwidth requirements.

Lewis also noted that a significant share of the motherboard area gets used for memory. With CXL memory disaggregation, memory resources can be treated like storage drives or PCIe cards in physical form factor. That could make server designs more compute-dense and limited primarily by thermal factors rather than a lack of motherboard real estate.
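That utilization argument can be illustrated with a toy model. The sketch below compares provisioning every server for its own worst-case memory demand against sizing a shared CXL pool for the hosts' aggregate demand. The host count and demand distribution are invented for illustration, and a real design would add headroom, but the direction of the savings is the point:

```python
# Toy model: dedicated per-host memory vs. a disaggregated CXL memory pool.
import random

random.seed(0)
HOSTS = 100
PEAK_GB = 512
# Assumed momentary memory demand per host, in GB (illustrative distribution).
demands = [random.uniform(64, PEAK_GB) for _ in range(HOSTS)]

dedicated_gb = HOSTS * PEAK_GB   # every host provisioned for its worst case
pooled_gb = sum(demands)         # a shared pool covers actual aggregate demand

print(f"Dedicated provisioning: {dedicated_gb:,.0f} GB")
print(f"Pooled provisioning:    {pooled_gb:,.0f} GB")
print(f"Memory saved by pooling: {1 - pooled_gb / dedicated_gb:.0%}")
```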

Why DPUs, IPUs, and CXL technologies matter

For decades, enterprises have been concerned with the growing power requirements to run their data centers and IT operations. As compute and storage requirements grew over time, so too did energy consumption. Servers with more powerful CPUs used more electricity outright. And the heat generated by the faster processors required more and more cooling, thus compounding the electric load.

To address these issues, enterprises have long offloaded some computational and security tasks to co-processors optimized for specific workloads, such as GPUs and hardware-based encryption engines.

Besides the use of GPUs, enterprises adopted virtualization and other energy-efficient practices that significantly reduced data center power consumption. As a result, Power Usage Effectiveness (PUE), a standard metric that describes how efficiently a data center uses energy, dropped from roughly 2.5, on average, in 2007 to 1.98 in 2011. (Lower is better.) That improvement came from power-optimizing techniques, including consolidating workloads through virtualization, improving cooling, and other initiatives.
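PUE itself is a simple ratio: total facility power divided by the power that actually reaches IT equipment, so a PUE of 2.5 means 1.5 watts of cooling and power-delivery overhead for every watt of compute. A minimal sketch, using the averages cited above with an assumed 1,000 kW IT load:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (no overhead); lower is better.
    """
    return total_facility_kw / it_equipment_kw

# Illustrative loads matching the averages cited above.
print(pue(2_500, 1_000))  # 2.5  -> 2007 average: 1.5 kW overhead per kW of IT
print(pue(1_980, 1_000))  # 1.98 -> 2011 average: 0.98 kW overhead per kW of IT
```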

Unfortunately, these methods have delivered little additional gain in the past decade, and PUE ratios have barely moved. Studies show a wide range of PUE values across data centers, but the overall average today is around 1.8.

Recently, attention has shifted to driving down the power requirements for switching and networking chores. The National Renewable Energy Laboratory (NREL) believes enterprises that use such techniques and focus on power reduction can realize about a 33 percent power efficiency improvement.

About the Author

Salvatore Salamone, Managing Editor, Network Computing

Salvatore Salamone is the managing editor of Network Computing. He has worked as a writer and editor covering business, technology, and science. He has written three business technology books and served as an editor at IT industry publications including Network World, Byte, Bio-IT World, Data Communications, LAN Times, and InternetWeek.
