The State of Server Technology: 2010 to 2012


  • "Doing more with less" isn't just a cliché at this point for IT administrators--for some, it's the only way they've ever known to work. That's led to a rise in consolidation, virtualization and cloud computing, all of which make it easier for admins and end users to do their jobs. Ironically, that's led to a surge for the server market, as admins anticipate getting more money to buy more servers, according to InformationWeek Reports' most recent entry in its "State of Server Technology" series.

    The August 2011 survey found that of 676 respondents, 23% planned to add servers and capability in 2011 and 2012, and 8% intended to increase their overall number of servers. In a turnaround from 2010, when 42% of respondents planned to decrease and consolidate their overall server count, only 33% reported they'd do so for 2011-2012. One respondent to the "2010 State of Server Technology" report cited virtualization as a factor in hardware reduction.

    "The move to virtual servers for high availability and redundancy has become our goal," said one survey respondent. "Not only does this reduce the hardware footprint, but it also reduces power and cooling needs. We are also looking at solid-state drives. This will improve efficiency, reduce power consumption and increase storage availability."

    So why are fewer respondents planning to decrease their server count? It goes back to consolidation, according to Kurt Marko, author of the 2011 report.

    "Consolidation--collapsing more work onto fewer servers--is the primary use case driving technology development in the server market, and it has implications on everything from the processor architecture (more cores are in) and memory subsystem (consumption is exploding) to network and storage I/O (feeding data to all those VMs is difficult on legacy networks)."

    Yet, despite the plans to add servers and capability, server budget plans for 2011 and 2012 stayed roughly the same as the numbers found in 2010. In both 2010 and 2011, 11% of respondents anticipated significant increases in their budgets; 26% said the budgets would increase slightly in 2011, while 39% expected things to remain the same.
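
    Marko's observation that memory consumption explodes under consolidation is easy to make concrete. The short Python sketch below is a rough, hypothetical sizing estimate (the function name, host sizes and overcommit ratios are illustrative assumptions, not figures from the report) of how many VMs a single consolidated host can absorb and which resource runs out first.

        # Hypothetical consolidation estimate: how many VMs fit on one host,
        # and whether CPU or memory is the binding constraint.
        def max_vms(host_cores, host_ram_gb, vm_vcpus, vm_ram_gb,
                    vcpu_overcommit=4.0, ram_overcommit=1.0):
            """Return (VM count, limiting resource) under simple overcommit rules."""
            by_cpu = int(host_cores * vcpu_overcommit // vm_vcpus)
            by_ram = int(host_ram_gb * ram_overcommit // vm_ram_gb)
            return min(by_cpu, by_ram), ("RAM" if by_ram < by_cpu else "CPU")

        # Example: a dual-socket, 16-core host with 96 GB of RAM running 2-vCPU/8 GB VMs.
        count, limit = max_vms(host_cores=16, host_ram_gb=96, vm_vcpus=2, vm_ram_gb=8)
        print(f"{count} VMs, limited by {limit}")  # -> 12 VMs, limited by RAM

    In this assumed configuration memory, not CPU, caps the consolidation ratio, which squares with the memory-capacity findings later in the report.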


  • Why are companies buying server hardware? The largest jump in factors cited was in private-cloud implementation; while 8% of respondents to the "2010 State of Server Technology" planned to launch private clouds, 16% intended to do so in 2011.

    "Private clouds, with a new breed of optimized applications built to capitalize on machine diversity and parallelism, could become virtualization's second wave," explained author Kurt Marko in the 2011 "State of Server Technology" report.

    The other factor to see an increase from 2010 was the need for processing capacity. Virtualization and higher consolidation ratios led the list of applications stressing existing server capacity, at 55%; 47% of respondents cited big data and other business intelligence and analytics needs.


  • The most recent "State of Server Technology" report found that data centers are still dominated by 2U (81%) and 1U (78%) platforms--little changed from the 2010 figures of 83% and 78%, respectively.

    The only increases were found in the use of 8U boxes, which was cited by 37% of respondents in 2010 and 38% in 2011, and mini-towers, which had exactly the same numbers. Modular and micro platforms, which came in at 20%, weren't tallied in 2010.

    "Despite all the vendor hype around the efficiency of blade servers and their suitability for converged, virtualized I/O fabrics, blade use hasn't grown among our respondents as we would have expected," wrote author Kurt Marko in the most recent report. He cited fears of vendor and platform lock-in as a potential factor in the lack of growth.


  • When it comes to evaluating servers, what was important in 2010 has remained important, for the most part. Configuration again ranked 4.3 on a scale of 1-5 (five being most important) to lead the list of server evaluation criteria according to the "State of Server Technology." That was followed by processor architecture (4.1, as in 2010) and cost of initial purchase (also 4.1).

    The only changes were found in the importance of ROI, which slipped from 4.1 to 4, and form factor, which went from a ranking of 4 to 3.9.


  • As with server evaluation criteria, the list of important technical features stayed much the same between the "2010 State of Server Technology" and 2011 "State of Server Technology" reports. Memory and processing capacity were the highest-ranking technical features considered for 2011 and 2012 purchases, both holding steady at 4.4 on a scale of 1-5 (five being most important).

    Networking speed and efficient power and cooling were the only features to increase in importance. Networking speed/number of interfaces supported on the motherboard scored a 4 in 2010 and rose to 4.1 in 2011 to rank third on the list of features, surpassing redundancy.

    "Yet another sign of virtualization's impact on server buying is that network I/O has displaced system redundancy, historically an important attribute for server buyers but now taken for granted, as the third most important technical feature," wrote author Kurt Marko in the 2011 report. "Other attributes, like energy efficiency, brand and internal storage, are tertiary at best."


  • The server market still relies on the Intel x86 to a huge degree--while its usage dipped, the drop was only one percentage point, from 95% of the 569 respondents to the March "2010 State of Server Technology" report to 94% in the most recent "State of Server Technology" survey. Its nearest competition comes from the AMD x86, which rose from 55% to 57%. The Intel Itanium held steady at 38%.

    What's behind the lock that Intel and, to a lesser degree, AMD enjoy on the server market? Kurt Marko, author of the 2011 report, explained: "The performance gap has effectively closed for all but the highest-end devices, while x86 chips have made great RAS (reliability, availability and serviceability) improvements. Furthermore, today's hypervisors have brought mainframe-style application partitioning to commodity hardware, while the manufacturing economies of scale that accrue by using essentially the same process technology and CPU architecture as that employed on mass-market PCs have left the ex-RISC CPUs uncompetitive on price."

    Processor family, however, ranked just 3.9 on the scale of important processor architecture features. Faster overall performance (4.4) and memory bandwidth and performance (4.2) were of greater concern.


  • The need for greater capacity may be best illustrated in the responses to the question "What memory capacity are you buying for your typical x86 servers?" In 2010, the money was being spent on 4 GB to 8 GB (31% of respondents); in 2011, 19% of respondents anticipated buying 33 GB to 64 GB, and 16% were thinking 65 GB to 128 GB. Going even higher, 129 GB to 192 GB, which didn't even appear in the "2010 State of Server Technology" findings, was cited by 6% of respondents, and 7% were looking at buying more than 192 GB.

    "An interesting artifact of the virtualization-fueled growth in server memory capacity is that it's actually happening faster than memory chip densities are increasing," wrote author Kurt Marko in the 2011 "State of Server Technology" report. "To compensate, systems must use either higher-density modules with double-stacked chips or more DIMM sockets. For example, some dual-socket systems now sport 32 or even 48 DIMM slots, which allows stuffing in a budget-busting 256 GB or more using 8-Gb modules."


  • While there was little change in the list of servers currently used or anticipated in the 2011 "State of Server Technology" report, there were some jumps in the percentages. Dell PowerEdge remained at the top of the list, cited by 71% of 676 respondents in 2011, an increase from 67% in 2010. "This is likely because of Dell's well-earned perception as delivering the most bang for the buck," wrote report author Kurt Marko.

    HP ProLiant held steady at No. 2 but saw a drop from 63% in 2010 to 60% in 2011. IBM System x (x86) placed fourth, rising from 31% of respondents to 37%. The vendors ranked in much the same order when respondents assessed their value propositions.