Is Intel's Romley for You? Getting the Most Out of the Xeon E5-2600

Intel's Xeon E5-2600 supports PCIe 3.0. The increase in performance can be substantial, but it can also be lost if I/O peripherals aren't up to the task. Here are the details on the new Romley chips and how best to take advantage of them.

Romley has been one of the most anticipated server platforms from Intel in many years. Romley is Intel's codename for the server platform combining the Sandy Bridge-EP CPU and the Patsburg Platform Controller Hub chipset. Finally, on March 6, Intel held a major product launch focused on the CPU and its official name: the Xeon E5-2600.

Designed for cloud, enterprise and high-performance computing (HPC) server applications, the Xeon E5-2600 family of processors effectively replaces the Xeon 5500 and 5600 processors by delivering more processing power, cache, memory addressing and I/O bus bandwidth. The Xeon E5-2600 betters the 5600 by adding two more cores, 8 Mbytes more cache, support for six more DIMMs of faster DDR3-1600 memory (increasing total memory capacity to 768 Gbytes), double the I/O bandwidth with PCIe 3.0, and more Intel QuickPath links between processors.
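
As a rough back-of-the-envelope illustration of the PCIe 3.0 bandwidth claim (the arithmetic below is ours, not the article's), the sketch compares the per-direction throughput of an eight-lane PCIe 2.0 link with a PCIe 3.0 link, using the published transfer rates (5 GT/s vs. 8 GT/s) and line encodings (8b/10b vs. 128b/130b):

# Back-of-the-envelope PCIe throughput comparison (per direction).
# Uses the published transfer rates and line encodings for PCIe 2.0 and 3.0.

def pcie_throughput_gbps(gt_per_s, payload_bits, total_bits, lanes):
    """Raw payload throughput in Gbit/s for one direction of a PCIe link."""
    return gt_per_s * (payload_bits / total_bits) * lanes

lanes = 8  # a typical x8 slot for a server adapter

gen2 = pcie_throughput_gbps(5.0, 8, 10, lanes)     # PCIe 2.0: 5 GT/s, 8b/10b encoding
gen3 = pcie_throughput_gbps(8.0, 128, 130, lanes)  # PCIe 3.0: 8 GT/s, 128b/130b encoding

print(f"PCIe 2.0 x{lanes}: {gen2:.1f} Gbit/s per direction")
print(f"PCIe 3.0 x{lanes}: {gen3:.1f} Gbit/s per direction")
print(f"Improvement: {gen3 / gen2:.2f}x")

The roughly 2x jump in link throughput is where the doubled I/O bandwidth figure comes from; whether a given adapter actually sees it depends on the card itself running at PCIe 3.0 rates.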

On top of all that, Xeon E5-2600 processors consume less power and I/O latency is lower. With the I/O hub integrated into the processor, high-bandwidth, low-latency I/O effectively comes standard with any server built on the chip. Taken together, these changes let the Xeon E5-2600 drive a new level of server performance.
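
Because the I/O hub now sits on each processor die, a PCIe adapter in a two-socket E5-2600 server is local to one socket and a QuickPath hop away from the other. A minimal Linux sketch (assuming a standard sysfs layout; eth0 is a placeholder interface name) shows how to check which NUMA node a network adapter hangs off, so latency-sensitive work can be pinned to the local socket:

# Minimal sketch: report the NUMA node a network interface's PCIe device sits on.
# Assumes Linux sysfs; "eth0" is a placeholder interface name.
from pathlib import Path

def nic_numa_node(interface="eth0"):
    path = Path(f"/sys/class/net/{interface}/device/numa_node")
    try:
        node = int(path.read_text().strip())
    except FileNotFoundError:
        return None  # virtual interface or sysfs entry not present
    return node if node >= 0 else None  # -1 means the kernel has no NUMA info

if __name__ == "__main__":
    node = nic_numa_node("eth0")
    if node is None:
        print("No NUMA locality information for eth0")
    else:
        print(f"eth0 is attached to NUMA node {node}")

Pinning the application to the adapter's local socket (with numactl or taskset, for example) avoids an extra inter-processor hop on every I/O.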

To fully exploit the capabilities of servers built on the Xeon E5-2600, the broad ecosystem of products surrounding the powerful new processor must undergo a technology refresh, and server adapters are the segment of that ecosystem most affected by the new Intel processor technology.

Before Romley, high-bandwidth, low-latency server I/O, like InfiniBand and purpose-built Ethernet adapters, was the exclusive domain of HPC. Starting this year, Xeon E5-2600 processors will drive the need for ever-higher bandwidth and ever-lower latency into enterprise environments. In this new era, application servers from Main St. to Wall St. will be configured for specific levels of bandwidth and latency.
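
To put rough numbers on what "specific levels of bandwidth and latency" can mean, the short calculation below (ours, not the article's) works out how long it takes just to serialize one full-size Ethernet frame onto the wire at several link speeds:

# Illustrative only: wire serialization time for one frame at various Ethernet speeds.
FRAME_BYTES = 1500          # a typical full-size Ethernet payload
SPEEDS_GBPS = [1, 10, 40, 100]

for gbps in SPEEDS_GBPS:
    bits = FRAME_BYTES * 8
    microseconds = bits / (gbps * 1e9) * 1e6
    print(f"{gbps:>3} GbE: {microseconds:6.2f} us to serialize a {FRAME_BYTES}-byte frame")

At 1 GbE the wire time alone is on the order of a dozen microseconds and dominates; at 10 GbE and beyond it drops to roughly a microsecond or less, so the few microseconds spent in the adapter and host stack become the term worth engineering for.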

To keep pace with new processors, the HPC server adapter industry continues to evolve. At the turn of the millennium, 1-Gbit Ethernet emerged to replace 100-Mbit Ethernet for high-performance server connectivity to networks. Around 2006, 10-Gbit Ethernet technology appeared in the core of enterprise networks and as an HPC cluster interconnect. By the end of 2012, server adapters with 40-GbE ports will emerge, followed by the availability of server adapters with 100-GbE ports by 2018. From 2000 to 2018, server adapter latency for HPC applications will be cut in half approximately every 12 years. The baseline for HPC-class server connectivity in the Romley era is now 10 Gbps of bandwidth and 2 microseconds of latency.

Frank Berry is CEO of IT Brand Pulse, a company that surveys customers about their perceptions of vendors and their products. Berry is a 30-year veteran of the IT industry, including senior executive positions with QLogic and Quantum.
