
Intel's Data Center Plan: More Than Chips

Memo to Wall Street: Intel begs you to forget about the shrinking PC market; its stock price can't take much more of a beating after underperforming the Dow by almost 9% over the past month. Instead, please think of the former Bunny People as the catalyst behind a monumental data center transformation, one that will enable infrastructure of massive scale, capable of rapidly delivering new services using software-controlled cloud resources and communicating with apps on devices as varied as smartphones and aircraft engines.

That was the message at Intel's Reimagine the Data Center event, held this week in San Francisco. The chip giant brought out executives from its Datacenter and Connected Systems Group to spend a day with a couple hundred financial analysts, IT consultants and technology journalists and drive home its strategy for re-architecting carrier, service provider and enterprise infrastructure.

Considered holistically, the strategy is both grand in scope and surprising in context. Grand in that Intel is no longer content just talking about CPUs, motherboards and process technology, but wants a hand in developing rack-scale integrated systems, software-defined networks and storage as a service. Surprising in that this comes from a company that still derives two-thirds of its revenue and nearly 70% of its earnings from PC processors and chipsets. Server chips make up a mere fifth of its business, while embedded devices, which include everything from mobile processors to network switching silicon, account for roughly 10%.

Jason Waxman, vice president and general manager of Intel's Cloud Platforms Group and the man in charge of its System on a Chip (SoC) and systems roadmaps, provided the most technical meat at the event. He pointed out that new devices and applications have created increasing diversity in cloud workloads, from big data and HPC applications like predictive analytics to voice and gesture UIs, video content search and delivery, small cell base stations, NAS filers and UTM appliances, each with very different CPU, memory and I/O needs.

Application diversity drives Intel's strategy of building SoCs that mix and match processor cores, hardware accelerators and I/O subsystems optimized for different workloads. But the SoCs aren't limited to standard parts like the Avoton, Rangely and Broadwell SoCs I wrote about in an earlier column; they also include custom silicon tailored to a customer's specific needs, perhaps even integrating another company's technology into a multi-chip module.

"We can put the right features into the right products," Waxman said. For example, Intel built a custom server CPU for eBay that can change its clock frequency by up to 50%. Waxman said this allows the firm to tailor its compute capacity and power utilization to extreme swings in seasonal resource demand customer demand.

[SoC architectures could transform network equipment. Find out how in "Cavium SoCs Promise Fast, Cheap IT Hardware."]

Moving from chips to systems, Intel shared specifics about its rack-scale systems strategy, which was first announced at the Beijing IDF. Developed in concert with the Open Compute Project, today's design focuses on physical aggregation, sharing power, cooling and system management across a chassis, Waxman said.

However, the next phase will incorporate a shared network fabric using silicon photonics integrated onto each server and optical interconnects between chassis. This solves a major problem with high-density systems, whether 1U rackmounts or blades: namely, upgrading servers and storage without ripping out the network, which Waxman notes is typically on a much slower technology refresh cycle. Eric Dahlen, a principal engineer for cloud platforms, said Intel's goal for rack-scale systems is 50% more servers per rack within the same power budget.
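To put that 50% goal in concrete terms, a back-of-the-envelope calculation (rack power and per-server wattage below are assumed figures, not numbers Intel provided) shows how much the per-server power envelope would have to shrink:

```python
# Back-of-the-envelope math for the stated goal: 50% more servers per rack
# within the same power budget. All numbers are illustrative assumptions.
rack_power_w = 12_000                          # assumed usable rack power budget
servers_today = 40                             # assumed current servers per rack
servers_target = int(servers_today * 1.5)      # 50% more servers

per_server_today = rack_power_w / servers_today    # 300 W each
per_server_target = rack_power_w / servers_target  # 200 W each

print(f"Today:  {servers_today} servers at {per_server_today:.0f} W each")
print(f"Target: {servers_target} servers at {per_server_target:.0f} W each")
```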

Intel recently demonstrated a 100 Gbps silicon photonics module at IDF Beijing, and Waxman followed up by showing off two reference server chassis using a shared mezzanine board with a switch chip for internal interconnect and a photonics module for external networking. One of the 2U, 21-inch Open Compute standard chassis contained three dual-socket Xeon servers, each with eight DIMM sockets; the other packed in 30 servers, each with a single Atom Avoton CPU and two SO-DIMMs.

Waxman said future generations of rack-scale systems will use the optical network backplane to connect pools of compute, memory (i.e. high-speed system memory shared among compute nodes) and storage chassis within a single rack. Dahlen said the goal of integrated fabrics is to eliminate network bottlenecks between high-density servers while reducing the number of cables in a rack by a factor of three.
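The cable-reduction claim is easier to see with a rough count. Assuming each server today needs its own data, management and storage links, while a fabric-integrated rack needs only a few shared optical uplinks per chassis (all counts below are illustrative assumptions, not Intel's published figures):

```python
# Rough illustration of the "one third the cables" claim; counts are assumed.
servers_per_rack = 42
cables_per_server_today = 3      # assumed: data + management + storage links
chassis_per_rack = 14
uplinks_per_chassis = 3          # assumed shared optical uplinks per chassis

today = servers_per_rack * cables_per_server_today      # 126 cables
with_fabric = chassis_per_rack * uplinks_per_chassis    # 42 cables

print(f"Conventional rack: {today} cables")
print(f"Integrated fabric: {with_fabric} cables ({today / with_fabric:.0f}x fewer)")
```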

He noted that the integrated photonics will be protocol agnostic, not Light Peak, aka Thunderbolt. He expects most implementations will use Ethernet, although some may opt for PCIe to a separate ToR switch. Intel's reference design used just a basic Ethernet switch in the mezzanine backplane, not a more sophisticated 2D or 3D torus fabric like those found in HP's Moonshot or SeaMicro's systems. When asked, Dahlen said Intel didn't think most applications would need or easily use such fabrics, although he admitted they could be valuable for HPC workloads.

NEXT: Intel's Open Network Platform
