Blade servers were created a few years ago as a solution for the cabling and integration management issues of clustered systems. They're built using identical server modules mounted as plug-in “blades.” Typically, a couple of HDDs are included in each blade.
A set of blades is mounted in a blade chassis, with redundant power supplies. Intra-cluster networking is provided by a built-in Ethernet switch. These configurations make for a compact, ready-to-use cluster.
The blade server concept began with the idea of having low-cost, small form-factor servers in a cabinet. Configurations of 12 servers in a 3U cabinet looked like the sweet spot, and the aim was to cut space, power, and acquisition cost for ISPs servicing the LAMP market.
The concept of inexpensive, small blade servers got lost in the post-9/11 crash of the industry. But blades survived as larger modules with more memory. These units can't be described as inexpensive. Some were even anodized gold to add a bit of "class" (and price)!
The blade server continues today, with a reasonable market size. But ever more densely packed server designs have nibbled away at that market, and the advent of Google-class CSPs and containerized data centers has renewed the emphasis on smaller footprints and lower costs.
Enter the microserver. This year, we can expect a microserver from all of the major vendors and some of the Chinese original design manufacturers. Some, such as HP's Moonshot and AMD's SeaMicro SM15000, are already here. Now we are talking about a chassis with as many as 64 microserver modules, rather than the typical 10 blades, packaged in about half the space. Microservers come with internal Ethernet switches, too.
Microservers are an interesting alternative to virtualized server clusters. They target the LAMP stack market, among others, and provide a way to offer edge services with low management costs. Any workload that is unchanging over a long period and isn't I/O-bound is a good fit for the concept.
Is there still room for the blade server? Blades serve more sophisticated use cases. Fibre Channel connectivity is available on blades but not on microservers, for example. The most significant differentiator is horsepower: each blade can hold a couple of 2-GHz Xeon processors, while a microserver uses a low-performance CPU such as an Atom or an ARM.
If that status quo stood the test of time, blades would see limited impact from the microserver. But we expect higher-powered Xeons and Athlons to be incorporated into microservers this year, which suggests the two technologies will collide in the market to some extent next year. With Xeon-class processors, the microserver will encroach on the hypervisor market that blades serve, and will probably become a solution of choice for private clouds.
Blades will also see competition from 1/2U mini-server clusters, which can deliver a "poor man's blade server." Mini-servers are compactly packaged, typically as 4 or 12 units in a frame, and approach blades in capability. They are good virtualization platforms, with two Xeons and plenty of memory. Containerized data centers often use them in large quantities, both because they don't require the rear access that blade-server chassis demand and because they are cheaper.
In the end, the issue will be TCO, and blades are clearly going to be under pressure. There hasn't been a major evolution of blade technology in the last five years, so it will be interesting to see whether the blade vendors respond to the threat posed by microservers.
Jim O'Reilly was Vice President of Engineering at Germane Systems, where he created ruggedized servers and storage for the US submarine fleet. He has also held senior management positions at SGI/Rackable and Verari; was CEO at startups Scalant and CDS; headed operations at PC ...