IPU Chip Offloads Networking and Some Security Tasks from CPUs
Infrastructure services such as virtual switching, security, and storage can consume a significant number of CPU cycles. Infrastructure Processing Units (IPUs) accelerate network infrastructure, freeing up CPU cores for improved application performance.
October 11, 2022
Intel and Google Cloud announced the launch of a co-designed chip, code-named Mount Evans, that can make data centers more secure and efficient. The chip takes over the work of packaging data for networking from the central processing units (CPUs) that do the main computing. It also provides stronger isolation between the different apps and users that may share a CPU, which is particularly important in the cloud. The new class of chip is called an infrastructure processing unit (IPU).
The chip builds on the expertise Intel gained developing multiple generations of FPGA-based SmartNICs to deliver high performance and security under real workloads. Its functions are similar to those of what have traditionally been called data processing units (DPUs): compute devices that move, process, secure, and manage data both in motion and at rest. The DPU market is often associated with companies such as NVIDIA and Marvell. Mount Evans plays a comparable role, handling many of the networking and security tasks common in the data centers of large enterprises and hyperscalers.
Enabling modern app requirements
Many organizations have adopted cloud-native methodologies where monolithic applications are replaced with composable elements made available as microservices and via APIs. How does this change things, and where does an IPU like Mount Evans come in?
In the days of monolithic, server-centric apps, systems were designed for use by a single entity: the enterprise itself. Cloud-native apps radically change this. In a recent Intel white paper, Guido Appenzeller, CTO of Intel’s Data Platforms Group, offered an easy-to-understand analogy. He noted the difference is like the role and location of a kitchen in a home versus a hotel. “In a home, it is convenient to have the kitchen close to the living room. However, in a hotel, the kitchen, where the food is prepared, and the dining room, where the guests eat, are clearly separated.”
Take that analogy back to the way cloud-native applications run and are used. The place where the apps run and the place where they are used (dare I say consumed?) are separate. In this model, significant server resources go to tasks such as network and storage functions, security, and managing vast amounts of network traffic.
That’s where the Mount Evans IPU comes in. According to Intel’s definition, an IPU is an advanced networking device with hardened accelerators and Ethernet connectivity. It is used to accelerate and manage infrastructure functions using tightly coupled, dedicated, programmable cores. As such, an IPU offloads all infrastructure-related tasks and provides an extra layer of security by serving as a control point for the system running infrastructure applications.
To that point about security, there are often hundreds of cores on a chip, and information can sometimes bleed between them. The E2000, the product name for the Mount Evans design, creates secure routes to each core to prevent such a scenario.
These issues are especially important in highly virtualized, multi-tenant data centers, which are common among enterprises and cloud providers alike. The core problem is that tasks like setting up virtual machines and getting data to the right place are essentially overhead. The Mount Evans chip separates those tasks from the main computing work and speeds them up. Doing so also helps protect those functions against attackers and adds flexibility to the data center.
"We see this as strategically vital. It's an extremely important area for us and for the data center," said Nick McKeown, senior vice president of the network and edge group at Intel, in an interview with Reuters.
Implications for the modern data center
Enterprises and cloud providers are moving to ever-faster Ethernet speeds to network together all the elements of the data center. The net result is an unprecedented volume of network traffic and a corresponding surge in the number of packets transferred per second. That puts a huge strain on traditional network interface cards (NICs).
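To see why, consider the arithmetic of line-rate packet processing. The short Python sketch below uses assumed figures (a 200 Gb/s link and a 3 GHz core; neither number comes from Intel or Google) to estimate the worst-case packet rate and the per-packet cycle budget a single core would face:

```python
# Back-of-the-envelope sketch with assumed figures: the per-packet CPU
# budget at line rate, which is what strains a traditional NIC-plus-CPU path.

LINK_GBPS = 200       # assumed Ethernet link speed
FRAME_BYTES = 64      # minimum Ethernet frame size
WIRE_OVERHEAD = 20    # preamble (8 B) + inter-frame gap (12 B)
CPU_GHZ = 3.0         # assumed core clock

bits_per_frame = (FRAME_BYTES + WIRE_OVERHEAD) * 8   # 672 bits on the wire
pps = LINK_GBPS * 1e9 / bits_per_frame               # ~298 million packets/s
cycles_per_packet = CPU_GHZ * 1e9 / pps              # ~10 cycles per packet

print(f"Worst-case packet rate: {pps / 1e6:.0f} Mpps")
print(f"Per-packet cycle budget on one core: {cycles_per_packet:.1f} cycles")
```

Roughly ten cycles per packet is nowhere near enough for a software data path, which is the heart of the problem.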
Compounding matters, software-defined networking (SDN) and increasingly sophisticated management software combine to drain precious compute resources. Intel estimates that in some highly virtualized environments, networking can consume 30 percent of a host CPU’s cycles.
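Intel’s 30 percent figure is easier to appreciate at scale. A quick sketch, with the server core count and rack density as illustrative assumptions, translates that overhead into whole cores:

```python
# Illustrative sketch: how many host cores a 30% networking overhead
# consumes across a rack. Core count and rack density are assumptions;
# only the 30% figure comes from Intel's estimate.

CORES_PER_SERVER = 64     # assumed host core count
SERVERS_PER_RACK = 40     # assumed rack density
NETWORK_OVERHEAD = 0.30   # Intel's estimate for highly virtualized hosts

lost_per_server = CORES_PER_SERVER * NETWORK_OVERHEAD   # 19.2 cores
lost_per_rack = lost_per_server * SERVERS_PER_RACK      # 768 cores

print(f"Cores consumed by networking per server: {lost_per_server:.1f}")
print(f"Cores consumed by networking per rack:   {lost_per_rack:.0f}")
```

Under those assumptions, a single rack surrenders the equivalent of hundreds of cores to infrastructure work that never runs a customer application.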
To address that issue, IPUs combine hardware-based data paths with processor cores. That allows infrastructure processing at the speed of the hardware (vs. doing these tasks in software). Use cases in data centers include:
Accelerated networking, which offloads virtual switch functionality to hardware.
Accelerated storage, which moves the storage stack from the host application processor to the IPU.
Accelerated security, which offloads encryption/decryption, compression, and other CPU-intensive security tasks.
Infrastructure processing, which offloads hypervisor and service-management functions to the IPU.
The cumulative effect of using IPUs is that data center operators can increase performance and optimize the use of compute resources.
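As a rough illustration of that cumulative effect, the toy model below adds up the four offload categories listed above. Only the 30 percent networking share comes from Intel’s estimate; the other shares are placeholder assumptions for the sketch:

```python
# Toy model of the cumulative offload effect. Only the 30% networking
# share is from Intel's estimate; the rest are placeholder assumptions.

offload_shares = {
    "virtual switching":      0.30,  # Intel's estimate (above)
    "storage stack":          0.08,  # placeholder assumption
    "crypto and compression": 0.05,  # placeholder assumption
    "hypervisor services":    0.04,  # placeholder assumption
}

app_share_before = 1.0 - sum(offload_shares.values())  # cycles left for apps
capacity_gain = 1.0 / app_share_before                 # after full offload

print(f"CPU share left for applications today: {app_share_before:.0%}")
print(f"Relative application capacity after offload: {capacity_gain:.2f}x")
```

Under these placeholder numbers, applications keep only about half the host CPU today, so reclaiming the rest nearly doubles usable application capacity per server.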