The State of Server Technology: 2010 to 2012

August 06, 2012

As more companies turn to virtualization and private clouds, they're also taking a look at the server technology in their data centers. This is leading to a boost for the server market. Find out why in this look at the state of server technology from 2010 to 2012.

x86 Server Memory Capacity

The need for greater capacity may be best illustrated by responses to the question "What memory capacity are you buying for your typical x86 servers?" In 2010, most of the money was going to 4 GB to 8 GB configurations (31% of respondents). By 2011, 19% of respondents anticipated buying 33 GB to 64 GB, and 16% were considering 65 GB to 128 GB. Going even higher, 129 GB to 192 GB, a range that didn't even appear in the "2010 State of Server Technology" findings, was cited by 6% of respondents, and 7% were looking at buying more than 192 GB.

"An interesting artifact of the virtualization-fueled growth in server memory capacity is that it's actually happening faster than memory chip densities are increasing," wrote author Kurt Marko in the 2011 "State of Server Technology" report. "To compensate, systems must use either higher-density modules with double-stacked chips or more DIMM sockets. For example, some dual-socket systems now sport 32 or even 48 DIMM slots, which allows stuffing in a budget-busting 256 GB or more using 8-Gb modules."

