Kurt Marko
Intel Unveils Plans to Dominate the Data Center

Intel released details for Atom SoCs designed to penetrate all areas of data center infrastructure, from micro servers to storage systems and network devices.

Intel is targeting the data center as an arena where it hopes to change its fortunes. The chip maker has endured rough times lately, including a disappointing earnings report that saw its stock price drop by over 5% in the following two days, an analyst call overshadowed by dirges about a PC market deteriorating faster than anyone anticipated, and the ongoing negative ramifications of being far too late to the mobile device party.

At a press event in San Francisco this week, Intel demonstrated it's no longer content with being a component-level arms merchant to IT equipment manufacturers. Instead, the company wants to be a significant force in driving the system architecture and technology direction of what some are now calling the software-defined data center.

To start, Intel provided new details about its next-generation Atom C2000 product family, commonly known by the Avoton and Rangely code names. Both are server SoCs built on the same microarchitecture and process node that Intel will repackage for tablets and smartphones, but they add features servers require, such as support for up to 64 GB of DDR3 ECC memory, the 64-bit x86 instruction set, virtualization (Intel VT) and integrated Ethernet NICs. The Atom line is Intel's answer to low-power ARM chips, both for servers and mobile devices.
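On Linux, server-oriented capabilities like those Intel lists show up as feature flags in /proc/cpuinfo (`lm` for 64-bit long mode, `vmx` for Intel VT-x). Here is a minimal sketch of how an admin might check for them; the helper function and the sample cpuinfo fragment are illustrative, not from Intel.

```python
def has_cpu_flags(cpuinfo_text, wanted):
    """Return a dict mapping each wanted flag to whether it appears
    in the 'flags' line(s) of /proc/cpuinfo-style text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found.update(line.split(":", 1)[1].split())
    return {f: f in found for f in wanted}

# Illustrative /proc/cpuinfo fragment; on a real host you would read
# open("/proc/cpuinfo").read() instead.
sample = "flags\t\t: fpu vme lm vmx est tm2 ssse3"
print(has_cpu_flags(sample, ["lm", "vmx", "ept"]))
# → {'lm': True, 'vmx': True, 'ept': False}
```

The same check answers the EPT question raised in the comments below: on kernels that report it, EPT support appears as an `ept` flag.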

The Atom C2000, which has been sampling to more than 50 OEMs since spring and is promised to reach production volume "soon," will come in multiple configurations.

At the top end is an eight-core chip that Intel says is seven times faster, and four times more efficient as measured by performance per watt, than its Centerton predecessor, which was announced just last December.
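Taken together, those two ratios imply something about power draw: if performance rises 7x while performance per watt rises only 4x, the new chip must consume roughly 7/4 = 1.75x the power of the old one at full tilt. A quick sanity check (assuming, as Intel didn't spell out, that both ratios are against the same Centerton baseline):

```python
# Intel's claimed ratios vs. the Centerton baseline.
perf_ratio = 7.0           # 7x the performance
perf_per_watt_ratio = 4.0  # 4x the performance per watt

# perf/watt = perf / power  =>  power = perf / (perf/watt)
power_ratio = perf_ratio / perf_per_watt_ratio
print(power_ratio)  # → 1.75
```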

[System on a Chip (SoC) architectures could transform network equipment. Find out how in "Cavium SoCs Promise Fast, Cheap IT Hardware."]

But Intel isn't planning simply to segment the new chips, which use the Atom Silvermont microarchitecture and the same 22nm, tri-gate process node that debuted on the Ivy Bridge Core CPUs, into multiple speed grades. It's also introducing the Rangely variant, which adds advanced features, such as crypto acceleration and packet-processing modules from the Fulcrum acquisition, specifically targeted at network appliances and applications.

With multiple SKUs that vary core counts and speeds for each product, Intel hopes the Atom SoCs will penetrate all areas of data center infrastructure, from micro servers to storage systems and network devices. Intel also updated the roadmap for next-generation Xeon and Atom products, code-named Broadwell and Denverton, based on its forthcoming 14-nm process technology scheduled for 2014 and later. Jason Waxman, VP and general manager of Intel's Cloud Platforms group, said the two product lines will finally be synced on the same process node for future product generations.

The other big news was Intel's announcement that it will build its first big-core Xeon SoC, integrating Broadwell cores, I/O processors and other subsystems using the 14-nm process. Intel released no technical details about the product, but Waxman said its performance and power envelope will slot between the Atom Denvertons and discrete Broadwell Xeons.

The chip maker also added substance to its rack-scale system architecture and data center vision, which I'll describe in a future column. It showed off two reference Open Compute rack-scale systems: one with three dual-Xeon motherboards and another with 30 single Atom SoC microservers, each using a mezzanine card for shared I/O and power distribution.

Intel intends to play a leading role in defining next-generation data center architectures, and the company provided enough technical details to demonstrate that it's not just hollow marketing talk.

(Full disclosure: Intel paid all travel expenses to the event.)

Given Intel's difficulties of late, do you think the company is taking the right steps to meet changing data center demands? Your comments are welcome.

Lorna Garey
7/29/2013 | 2:57:23 PM
re: Intel Unveils Plans to Dominate the Data Center
Kurt, did you see the ad for Moonshot on yesterday's Sunday shows?
7/25/2013 | 7:24:04 PM
re: Intel Unveils Plans to Dominate the Data Center
Re: 64GB. You have to remember that these are Atom cores and unlikely to be virtualized (although they can be). Thus, they'll generally run single workloads where 64GB is more than enough. If you want big cores with lots of memory and many VMs, Broadwell is your chip. They didn't mention EPT, but the current-generation Centerton chips (the 1200 series) do not support it.

As to your second point: microservers target a different workload than big-core machines. The idea behind these hyperscale systems, whether Intel's reference design, HP Moonshot, Calxeda, or SeaMicro, is to size the hardware to the workload so that you don't have to virtualize. In this light, VMware is a kludge needed only because we had hardware vastly mismatched to software requirements. Atom and other microservers are designed to correct this mismatch. If your workload needs a lot of memory, I/O and CPU cycles, go Xeon, or Xeon Phi (MIC) for parallelizable workloads; if not, go Atom. For something in between, the Broadwell SoC will mix elements of both architectures.
7/25/2013 | 2:44:38 PM
re: Intel Unveils Plans to Dominate the Data Center
If they are planning to market the C2000 for servers, then it seems a bit light. Only 64GB per socket isn't enough: all of the servers we implement for ourselves and our customers have 96 to 128GB per socket. You said "Intel VT," but what about EPT? Is that embedded NIC 1G or 10G? There is a huge movement toward 10G Ethernet; we expect the data center core to be dominated by 10G within three years, and many shops are already there, including our own.

If these things aren't sized for virtual hosts, then they might as well not be mentioned in the same sentence as servers. Over 50% of servers are already virtualized, and most market analysts project 80% by the end of this year, approaching 100% within three years. So if they aren't big enough to run VMware or Hyper-V, they aren't big enough to be servers.