Servers have gotten boring. They're IT's version of the family sedan: a product that steadily improves but rarely inspires. Ever since x86 systems got good enough to capture most of the market (and, make no mistake, with only a 30% share and falling, non-x86 systems are well into niche territory for most enterprises), the server business has mimicked the PC market: Everyone's on the Intel merry-go-round waiting for the next processor update, and everyone's trying to differentiate at the margins. Yet as succeeding CPU generations add more horsepower to systems that are often underutilized in the first place, and as chips get hotter and systems smaller, something's got to give. Microservers just might be the disruptive concept that recalibrates server designs to today's application and data center needs.
Microservers favor density and power efficiency over raw performance and serve as a counterweight to hardware that's increasingly overkill for many IT tasks. But as with any nascent product category, pinning down a precise definition of microserver attributes and features is like nailing Jell-O to the wall. Several CPU and system architectures are vying for mindshare and eventual market acceptance, but a couple of key properties stand out. Microservers feature very low-power processors, typically 10W or less but certainly under 20W; by comparison, the Xeon E5-series CPUs used in most new servers typically run between 95W and 130W. Microservers also offer very high density, often 20 or more CPU sockets per rack unit.
So far, there are two established approaches to dramatically reducing server power consumption, delineated by two very different CPU architectures: x86, primarily using a variant of the tablet-focused Intel Atom processor, and ARM, a processor core famous for powering most of today's mobile devices but that now serves as the foundation for a new generation of integrated SoCs (systems on a chip) tailored to servers.
Aside from using different instruction sets, x86 and ARM servers also differ in performance and address space: All but the lowest-end Atoms are 64-bit processors, while ARM is still stuck in 32-bit land. That will change in the near future, as ARM is expected to announce its 64-bit v8 core at ARM TechCon in a couple of weeks. But Karl Freund, VP of marketing at Calxeda, one of the leaders in ARM-based server components, says it will be at least a year before we see products using variants of this next-generation core. Freund also expects to see increasing differentiation among ARM-based chips. Some vendors, like Calxeda, will stick with the standard ARM core, preferring to add value by integrating fabrics and management features on the chip, while others, like Marvell (whose chip is used in Dell's Copper server), modify ARM's design to eke out extra performance.
From an application perspective, the big difference between Intel and ARM parts is the instruction set. Atom processors use x86 and so can run any OS and application that works on the fastest Xeon. ARM, in contrast, has its own instruction set, meaning operating systems and applications must be recompiled for the platform. There are several ARM Linux distributions, including everyone's favorite, Ubuntu, but support for Windows Server and bare-metal hypervisors like VMware ESXi is off the table. Although Microsoft has released an ARM version of Windows 8 (Windows RT), it's been mum on whether a Server 2012 variant is in the works, so don't expect anything before the first service pack release, if then.
While Intel still makes 32-bit Atoms for mobile devices, existing server implementations, like SeaMicro's category-defining SM10000 series, use the slightly more power-hungry but far more capable 64-bit, dual-core chips. Current microservers use the older-generation (45 nm process) Pineview N570 part. However, look for systems to migrate this winter to the newer (32 nm) Saltwell core (the same one used in the latest Windows 8 Clover Trail tablets), which in server form ships as Intel's Centerton-based Atom S processors.
The microserver concept was really defined by a new generation of server startups like SeaMicro (now part of AMD), Calxeda and Tilera; Intel was lukewarm to the idea until recently. Aside from Tilera, most of the non-Intel designs use ARM processor cores on a custom SoC. Much like Apple wraps custom I/O and graphics circuitry around a couple of ARM cores in its A5X and A6 iPhone/iPad chips, so, too, have companies like Calxeda and Marvell modified the basic ARM design for server duty.
Although the new Atoms should outpace existing ARM designs in raw performance, the other big consideration for server buyers goes deeper: The current ARM architecture isn't a true 64-bit design. Most products, like Calxeda's EnergyCore processor, have a 32-bit instruction length and address space. Although Marvell's Armada XP, used in Dell's Copper product, does support 64-bit memory addresses, it's still a 32-bit part, with a 32-bit instruction set and data bus.
Technology details aside, the bottom line is that microservers provide more than adequate performance for many IT workloads at a fraction of the up-front and operational cost. The power savings alone are startling. Calxeda has measured a 24-node system, clocked at 1.1 GHz and outfitted with 24 SSDs and 96 GB of memory, at less than 200W under maximum load. Yet on a core-by-core basis, this same system yields about 80% of the Web server performance (measured with ApacheBench) of a low-power Xeon E3 while using one-twentieth the power (about 5W per CPU).
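As a back-of-the-envelope check, those figures imply roughly a 16x performance-per-watt advantage. A minimal sketch, taking the 80% per-core performance and one-twentieth power claims above at face value (the baseline Xeon wattage is derived from the article's ratio, not measured independently):

```python
# Rough perf-per-watt comparison using the figures cited above.
xeon_rel_perf = 1.0          # baseline: low-power Xeon E3 core (per-core web serving)
arm_rel_perf = 0.80          # Calxeda ARM core: ~80% of Xeon per core (ApacheBench)
arm_watts = 5.0              # ~5 W per ARM CPU
xeon_watts = arm_watts * 20  # implied by "one-twentieth the power"

perf_per_watt_ratio = (arm_rel_perf / arm_watts) / (xeon_rel_perf / xeon_watts)
print(f"ARM perf/W advantage: {perf_per_watt_ratio:.0f}x")  # prints "ARM perf/W advantage: 16x"
```

In other words, even conceding 20% per-core performance, the power delta dominates, which is why the comparison favors microservers so heavily for throughput-bound web workloads.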
Stacking dozens, if not hundreds, of individual CPUs in a server chassis, with plenty of power for common Web application and content distribution workloads at a fraction of the power and cooling demands of a conventional server, is a compelling proposition, particularly in older data centers that can't power and cool a 15- to 20-kW server rack. By better matching server hardware to customer needs, microservers might upend the server market just as tablets have dethroned PCs as many end users' client of choice.
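To see why power-constrained facilities are the natural fit, consider a rough capacity sketch using Calxeda's roughly 200W-per-24-node figure cited earlier. The 5 kW rack budget below is an illustrative assumption for an older data center, not a number from the vendors:

```python
# How many microserver nodes fit in a legacy rack's power envelope?
watts_per_chassis = 200.0                  # ~200 W for a 24-node Calxeda system (cited above)
nodes_per_chassis = 24
watts_per_node = watts_per_chassis / nodes_per_chassis  # ~8.3 W per node, all-in

legacy_rack_budget_w = 5000.0              # assumed budget for an older facility's rack
max_nodes = int(legacy_rack_budget_w / watts_per_node)
print(max_nodes)  # prints 600
```

Even a rack that could never host a 15-kW wall of Xeons can, under these assumptions, power hundreds of independent microserver nodes, which is the density argument in a nutshell.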
Kurt Marko is an IT industry veteran and contributor to Network Computing and InformationWeek.