Intel's Data Center Plan: More Than Chips

The chip giant aims to shape future data center designs for everything from rack-level systems to software defined networks and servers.

Kurt Marko

July 26, 2013

Memo to Wall Street: Intel begs you to forget about the shrinking PC market; its stock price can't take much more of a beating after underperforming the Dow by almost 9% over the past month. Instead, please think of the former Bunny People as the catalyst behind a monumental data center transformation, one that will enable massive-scale infrastructure capable of rapidly delivering new services using software-controlled cloud resources and communicating with apps on devices as varied as smartphones and aircraft engines.

That was the message at Intel's Reimagine the Data Center event, held this week in San Francisco. The chip giant brought out executives from its Datacenter and Connected Systems Group to spend a day with a couple hundred financial analysts, IT consultants and technology journalists and drive home its strategy for re-architecting carrier, service provider and enterprise infrastructure.

Considered holistically, the strategy is both grand in scope and surprising in context. Grand in that Intel is no longer content just talking about CPUs, motherboards and process technology, but wants a hand in developing rack-scale integrated systems, software-defined networks and storage as a service. Surprising in that this comes from a company that still derives two-thirds of its revenue and nearly 70% of its earnings from PC processors and chipsets. Server chips make up a mere fifth of its business, while embedded devices, a category that includes everything from mobile processors to network switching silicon, account for a negligible 10%.

Jason Waxman, vice president and general manager of Intel's Cloud Platforms Group and the man in charge of its System on a Chip (SoC) and systems roadmaps, provided the most technical meat at the event. He pointed out that new devices and applications have created increasing diversity in cloud workloads, from big data and HPC applications like predictive analytics to voice and gesture UIs, video content search and delivery, small cell base stations, NAS filers and UTM appliances with quotidian CPU, memory and I/O needs.

Application diversity drives Intel's strategy of building SoCs, mixing and matching processor cores, various hardware accelerators and I/O subsystems that are optimized for different workloads. But the SoCs aren't limited to standard parts like the Avoton, Rangely and Broadwell SoCs I wrote about in an earlier column; they also include custom silicon tailored to a customer's specific needs, perhaps even integrating another company's technology into a multi-chip module.

"We can put the right features into the right products," Waxman said. For example, Intel built a custom server CPU for eBay that can change its clock frequency by up to 50%. Waxman said this allows the firm to tailor its compute capacity and power utilization to extreme swings in seasonal resource demand customer demand.

[SoC architectures could transform network equipment. Find out how in "Cavium SoCs Promise Fast, Cheap IT Hardware."]

Moving from chips to systems, Intel shared specifics about its rack-scale systems strategy, which was first announced at IDF Beijing. Working with the Open Compute Project, Intel has focused today's design on physical aggregation, sharing power, cooling and system management across a chassis, Waxman said.

However, the next phase will incorporate a shared network fabric using silicon photonics integrated onto each server and optical interconnects between chassis. This solves a major problem with high-density systems, whether 1U rackmounts or blades: namely, upgrading servers and storage without ripping out the network, which Waxman noted is typically on a much slower technology refresh cycle. Eric Dahlen, a principal engineer for cloud platforms, said Intel's goal for rack-scale systems is 50% more servers per rack within the same power budget.
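Some back-of-the-envelope arithmetic shows what that density goal implies: if the rack's power envelope stays fixed while the server count grows by 50%, each server's share of the envelope has to shrink by roughly a third. The rack power and server counts in the sketch below are illustrative assumptions, not Intel figures.

```python
# Implication of Intel's "50% more servers per rack, same power budget" goal.
# The rack power and baseline server count are assumptions for illustration.
rack_power_w = 12_000                       # assumed rack power budget
servers_today = 40                          # assumed current servers per rack
servers_target = int(servers_today * 1.5)   # Intel's stated 50% density gain

per_server_today = rack_power_w / servers_today
per_server_target = rack_power_w / servers_target

print(f"Per-server power budget today:  {per_server_today:.0f} W")
print(f"Per-server power budget target: {per_server_target:.0f} W")
print(f"Required per-server reduction:  {1 - per_server_target / per_server_today:.0%}")
```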

Intel recently demonstrated a 100 Gbps silicon photonics module at IDF Beijing, and Waxman followed up by showing off two reference server chassis that use a shared mezzanine board with a switch chip for internal interconnect and a photonics module for external networking. One of the 2U, 21-inch Open Compute-standard chassis contained three dual-socket Xeon servers, each with eight DIMM sockets; the other held 30 servers, each with a single Atom Avoton CPU and two SO-DIMMs.

Waxman said future generations of rack-scale systems will use the optical network backplane to connect pools of compute, memory (i.e. high-speed system memory shared among compute nodes) and storage chassis within a single rack. Dahlen said the goal of integrated fabrics is to eliminate network bottlenecks between high-density servers while reducing the number of cables in a rack by a factor of three.

He noted that the integrated photonics will be protocol-agnostic, not tied to Light Peak, aka Thunderbolt. He expects most implementations will use Ethernet, although some may opt for PCI to a separate top-of-rack (ToR) switch. Intel's reference design used just a basic Ethernet switch in the mezzanine backplane, not a more sophisticated 2D or 3D torus fabric like those found in Moonshot or SeaMicro's boxes. When asked, Dahlen said Intel didn't think most applications would need or easily use such fabrics, although he admitted they could be valuable for HPC workloads.

Intel's Open Network Platform

For networks, Intel preached the gospel of SDN, OpenFlow and NFV (using Vyatta as an example) and showed a reference server chassis, the Intel Open Network Platform, for OpenFlow switches and controllers.

Essentially a 2U rack-mount server optimized for network applications, it combines Xeon processors, FM6700 switch silicon from Intel's Fulcrum acquisition, and Intel's 89xx communications chipset, all controlled by the Wind River embedded OS running an OpenFlow software stack, complete with an OpenStack Quantum plugin. The platform already has one design win, from Quanta, with more on the way.
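For readers unfamiliar with what an OpenFlow controller actually does, the sketch below shows the kind of flow rule such a software stack pushes down to a switch. It uses the open-source Ryu framework purely as an illustration; Intel's reference design runs its own Wind River-based stack, which isn't shown here. The rule installed is the standard OpenFlow 1.3 table-miss entry that forwards unmatched packets to the controller.

```python
# A minimal OpenFlow 1.3 controller app using the open-source Ryu framework
# (an illustrative stand-in, not Intel's Wind River stack). On switch connect,
# it installs a table-miss flow that sends unmatched packets to the controller.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match all packets; send anything without a more specific flow entry
        # to the controller for a forwarding decision.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)
```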

Intel had few concrete details to share around storage and big data, although it did reveal impressive results on a data-sorting benchmark: the Intel-optimized Hadoop distribution (yes, it's participating in Apache projects), running on E5-series Xeon servers with 10 GbE NICs and SSDs, cut the time to sort 1 TB of data from more than four hours to seven minutes.
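Taken at face value, those figures work out to roughly a 34x speedup and an effective sort rate of about 2.4 GB/s, as the quick calculation below shows (treating "more than four hours" as exactly 240 minutes).

```python
# Quick math on the Hadoop sort result Intel cited: 1 TB sorted in 7 minutes
# versus more than four hours (treated here as exactly 240 minutes).
data_gb = 1000
baseline_min = 4 * 60
optimized_min = 7

speedup = baseline_min / optimized_min
throughput_gb_s = data_gb / (optimized_min * 60)

print(f"Speedup: at least {speedup:.0f}x")
print(f"Effective sort throughput: ~{throughput_gb_s:.1f} GB/s")
```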

Of course, this is still Intel, so there was plenty of news about processor roadmaps and new SoCs optimized for everything from microservers and storage arrays to network appliances and HPC grids. But the takeaway after a day of briefings is that Intel wants a greater hand in defining how its components are used within hyperscale microservers, network switches, SDN controllers, HPC appliances using MIC (Many Integrated Core, aka Xeon Phi) and storage arrays -- that is, devices at every level of the data center technology stack. Furthermore, Intel wants to exert more influence over, and contribute to, application architectures and technology to ensure that they run best on Intel hardware, obviously in hopes that software performance will drive hardware sales.

It's an extremely aggressive agenda, and one Intel isn't hubristic enough to tackle on its own; hence its participation in so many open source projects and industry consortia, including Open Compute, OpenFlow, OVS, OpenStack and OpenDaylight.

It's clear that Intel sees both an opportunity and a threat as the data center makes a generational change to cloud-like, massively distributed, software-controlled infrastructure, and it doesn't want to miss this one like it did the mobile client transition. But we're still early in this cycle. I am encouraged by Intel's direction; however, the company has plenty of powerful competitors, such as Cisco and EMC, pushing their own agendas in areas where Intel has never been a force, so it's unlikely Intel's dream will play out exactly as scripted.

Still, I think the company is pointed in the right direction and seems willing to make major changes that potentially undermine some of its cash-cow businesses, like Xeon CPUs and chipsets, to ensure long-term success in the data center. And if it scrapes a few elbows with other big IT vendors, so much the better for IT customers.

Full disclosure: The event was sponsored by Intel and the vendor paid for all travel and accommodations.
