Microservers are bite-sized systems using multiple low-power processor cores grafted onto an SoC replete with cache memory, I/O circuitry and hardware accelerators. However, in their earliest incarnation, microserver CPUs were too slow, used a 32-bit instruction set, and didn't support enough memory for server duty. The original Atom and ARM chips, designed primarily for mobile devices, were simply inadequate to power servers.
Poor performance and no applications meant few customers. Indeed, InformationWeek's State of Server Technology survey (registration required) found that only 2% of respondents had purchased a high-density, low-power microserver, and a meager 5% were even seriously considering them.
All of those objections will soon be ancient history. Both ARM and Intel, the two leading contenders in the microserver architecture competition, are releasing new 64-bit products using next-generation process nodes that will substantially improve performance and memory capacity while still fitting in a 5- to 20-watt per-SoC power budget.
Intel claims its new Avoton parts, christened the Atom C2000 family, deliver up to seven times the performance of the prior-generation (Centerton) S1200 while supporting up to 64GB of memory.
[Don't judge microservers by their size. Find out why in "Microservers: The Legos of the Data Center."]
Meanwhile, AMD's latest product roadmap highlights an 8-core (and subsequent 16-core) SoC, code-named "Seattle," using 64-bit ARM Cortex-A57 cores that the firm claims will deliver "2-4X the performance of AMD's recently announced AMD Opteron X-Series processor with significant improvement in compute-per-watt."
Andrew Feldman, co-founder of SeaMicro and now a VP and GM at AMD following its acquisition of SeaMicro, said that by 2015 ARM will deliver performance comparable to Xeons of a couple of years ago. His projections are more than idle speculation, since tests show that the first shipping 64-bit ARM chip, Apple's A7 used in the iPhone 5S and new iPads, benchmarks at roughly the same score as a 2010 Mac Mini using a 2.66GHz Core 2 Duo.
Clearly, microserver CPUs are closing the performance gap, but we're still 6 to 12 months from having production servers using the new components.
While pioneers like Calxeda and SeaMicro continue to push the technology (which got a needed shot of credibility when Verizon selected SeaMicro systems to power its new IaaS offering), for the foreseeable future microservers as compute engines only make sense for big service providers running millions of workloads, not the typical enterprise IT shop.
Meanwhile, there are plenty of other applications in the data center where microserver-like systems are a perfect fit, namely as the hardware engines for network and storage appliances.
Shannon Poulin, Intel's VP and GM of data center marketing, says appliances and fixed-function boxes will comprise the majority of design wins for its new Atom products: Avoton and Rangeley, a variant optimized for communications tasks that uses the same processing cores with added I/O subsystems.
At Intel's Reinventing the Data Center event last summer, a common theme among Intel executives was how the company sees its processors, particularly low-power cores built into application-specific SoCs, being used in a wider variety of hardware, not just traditional servers.
During a breakout session, Jason Waxman, GM of Intel's Cloud Infrastructure Group, pointed out that the company has perfected an SoC design methodology that allows it to crank out various combinations of integrated modules for different applications. Intel's appliance strategy appears to be working: it already has over 50 design wins for the Avoton and Rangeley products, ranging from blade and top-of-rack (ToR) switches to security appliances (UTM) and cold storage arrays.
Calxeda also sees the potential of microservers to improve density, efficiency and performance in appliances, notably storage arrays. At this year's Computex event, the firm announced agreements with three major ODMs (Aaeon, Foxconn and Gigabyte) to use Calxeda's EnergyCore SoCs, which integrate up to four 32-bit ARM Cortex-A9 cores, an internal fabric switch, a memory controller and 10 Gigabit Ethernet transceivers.
For example, Foxconn's 4U design incorporates 60 drive slots and 12 Calxeda SoCs, packaged as six dual-socket modules, to power a network storage array with 12 10 Gigabit Ethernet ports, i.e., one per five drives. Since each quad-core SoC manages only five drives, the design eliminates the need for separate HBAs, which simplifies the system and further reduces power consumption.
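The density math behind that design works out neatly. A quick sketch, using only the figures cited above (60 drives, 12 SoCs, 12 ports), shows the per-SoC and per-port ratios:

```python
# Back-of-the-envelope math for the Foxconn 4U array described above.
# The inputs (60 drive slots, 12 SoCs, 12 ports) come from the article;
# the rest is simple arithmetic.
DRIVE_SLOTS = 60
SOCS = 12             # packaged as 6 dual-socket modules
TEN_GBE_PORTS = 12

drives_per_soc = DRIVE_SLOTS // SOCS            # drives each quad-core SoC manages
drives_per_port = DRIVE_SLOTS // TEN_GBE_PORTS  # drives behind each 10GbE port

print(drives_per_soc, drives_per_port)  # 5 5
```

With only five drives hanging off each SoC's integrated controllers, no separate HBA silicon is needed, which is where the power and complexity savings come from.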
To harness all this storage capacity, Calxeda has partnered with Inktank to optimize the Ceph distributed storage system for the ARM platform. Ceph allows building redundant, self-healing storage pools that span multiple servers, such that losing a single system doesn't bring down the entire pool.
Using ARM-based servers allows building larger clusters from denser drive arrays, improving storage density, power efficiency and overall cluster availability: adding nodes reduces the aggregate failure rate as well as the time and resources needed to automatically repair a distributed filesystem.
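The repair-time claim follows from parallelism: when a node dies, every surviving node helps re-replicate its data, so recovery bandwidth grows with cluster size. The sketch below is an illustrative first-order model, not Ceph's actual recovery algorithm, and the capacity and bandwidth figures are hypothetical:

```python
# Illustrative model of why bigger clusters heal faster: a failed node's
# data is re-replicated in parallel by all surviving nodes, so aggregate
# recovery bandwidth scales with cluster size. (Not Ceph's real algorithm;
# the 48 TB/node and 100 MB/s figures are assumptions for illustration.)
def rebuild_hours(nodes: int, tb_per_node: float, mb_s_per_node: float) -> float:
    """Hours to re-replicate one failed node's data across the survivors."""
    assert nodes > 1, "need at least one surviving node"
    total_mb = tb_per_node * 1e6                # data to recover, in MB
    aggregate_bw = (nodes - 1) * mb_s_per_node  # parallel recovery bandwidth, MB/s
    return total_mb / aggregate_bw / 3600

small = rebuild_hours(nodes=6,  tb_per_node=48, mb_s_per_node=100)
large = rebuild_hours(nodes=60, tb_per_node=48, mb_s_per_node=100)
print(f"6 nodes: {small:.1f} h, 60 nodes: {large:.1f} h")  # 26.7 h vs 2.3 h
```

Under this model, a ten-fold larger cluster of dense, low-power nodes shrinks the repair window by roughly an order of magnitude, which is exactly the availability argument being made for microserver-backed storage.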
The coming generation of low-power 64-bit SoCs means it's too early to write off microservers as cloud compute engines: there will be a plethora of server products using Avoton, the ARM Cortex-A57, and even the Power architecture, as in the just-announced Servergy systems.
However, in the near term, embedded microservers in storage and networking appliances are likely to be a more common application of the technology, particularly in the enterprise. So don't be surprised when your next switch says Intel Inside or the guts of your storage array have more in common with your phone than your PC.