
The State of Server Technology: 2010 to 2012

As more companies turn to virtualization and private clouds, they're also taking a look at the server technology in their data centers. This is leading to a boost for the server market. Find out why in this look at the state of server technology, from 2010 to 2012.

The need for greater capacity may be best illustrated by the responses to the question "What memory capacity are you buying for your typical x86 servers?" In 2010, the money was being spent on 4 GB to 8 GB (31% of respondents); in 2011, 19% of respondents anticipated buying 33 GB to 64 GB, and 16% were considering 65 GB to 128 GB. Going even higher, the 129 GB to 192 GB range, which didn't even appear in the "2010 State of Server Technology" findings, was cited by 6% of respondents, and 7% were looking at buying more than 192 GB.

"An interesting artifact of the virtualization-fueled growth in server memory capacity is that it's actually happening faster than memory chip densities are increasing," wrote author Kurt Marko in the 2011 "State of Server Technology" report. "To compensate, systems must use either higher-density modules with double-stacked chips or more DIMM sockets. For example, some dual-socket systems now sport 32 or even 48 DIMM slots, which allows stuffing in a budget-busting 256 GB or more using 8-Gb modules."
