Can You Afford 384GB of RAM? Or How Much is Too Much Memory?
February 5, 2010
When Cisco announced its UCS last year, I was impressed by the fact that the double-width blades could hold a whopping 384GB of memory. Now it's budget time here at DeepStorage Labs, and I'm figuring out how much we're going to have to shell out for the pair of Nehalem servers we're adding to the lab in the next few weeks. Turns out that if you try to get even close to 384GB, you'll end up paying several times as much for memory as for the rest of the server.
Here at the lab we used to cram as much memory into our servers as they would hold. As I started to price out our new gear, I got a bit of sticker shock when the Dell site showed I could upgrade an R710 to 192GB for just $55,060. At that price, a Cisco server with 384GB of memory would cost over $110,000 for the memory alone. Now we have to ask: isn't it better to just buy another server?
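If you want to check my math, here's the back-of-the-envelope version as a quick Python sketch. It only uses Dell's quoted upgrade price; the 384GB figure simply extrapolates the same per-gigabyte rate:

```python
# Back-of-the-envelope memory pricing from Dell's quoted R710 upgrade
dell_upgrade_price = 55060   # USD for the 192GB upgrade
dell_upgrade_gb = 192

price_per_gb = dell_upgrade_price / dell_upgrade_gb
print(f"Memory cost: ${price_per_gb:.0f}/GB")                # ~$287/GB

# Extrapolate to a fully loaded 384GB Cisco blade at the same rate
print(f"384GB at that rate: ${price_per_gb * 384:,.0f}")     # ~$110,120
```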
After all, a whole server with 72GB of memory and dual Xeon 5520 processors, which seems like the Nehalem sweet spot balancing performance, features and cost, will set me back just under $6,000. Even if I buy vSphere Enterprise Plus and Windows Server Datacenter licenses from Dell, three servers with their software total about $57,000 vs. $72,000 for a single 192GB server and OS. Wouldn't you rather have 3x the compute power and 3x the I/O for less money?
Then there's the cost of connecting each server to the data center network. My quick calculations show that FCoE, 10 Gigabit Ethernet plus Fibre Channel, and six 1 Gigabit Ethernet ports plus Fibre Channel all end up needing around $5,000 worth of networking gear to connect each server to the top-of-rack switches. Adding in the network costs brings the out-of-pocket total to $72,000 for three servers or $77,000 for one.
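Putting the whole comparison in one place, here's a rough tally using the round numbers above. The software figures are my combined estimates from the Dell configurator, not formal quotes:

```python
# Scale-out vs. scale-up, using the round numbers from above
three_hosts_with_sw = 57000   # 3 x 72GB servers + vSphere Ent. Plus + Windows Datacenter
one_host_with_sw = 72000      # 1 x 192GB server plus the same software
network_per_server = 5000     # FCoE, 10GbE+FC, or 6x1GbE+FC all land near here

print(f"Three 72GB hosts: ${three_hosts_with_sw + 3 * network_per_server:,}")  # $72,000
print(f"One 192GB host:   ${one_host_with_sw + network_per_server:,}")         # $77,000
```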
Of course, you don't have to use the ridiculously expensive 16GB DIMMs (around $300/GB); you could use the $100/GB 8GB sticks instead, which means the Cisco servers can hold 192GB of affordable memory vs. 96GB for everyone else. I'm sure there are cases where huge tracts of memory are a good idea, but I don't see them on a regular basis, especially not in a two-socket x86 server. A big Unix box, maybe... How do you choose between scaling up per server and scaling out to multiple VMware hosts?
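And to see just how much the DIMM choice matters, one more quick sketch. The per-gigabyte figures are my rough street-price estimates from above:

```python
# What a given capacity costs with each DIMM option, at rough street prices
price_16gb_dimms = 300   # ~$300/GB for 16GB DIMMs
price_8gb_dimms = 100    # ~$100/GB for 8GB DIMMs

print(f"192GB of 16GB DIMMs: ${192 * price_16gb_dimms:,}")              # $57,600
print(f"192GB of 8GB DIMMs (Cisco): ${192 * price_8gb_dimms:,}")        # $19,200
print(f"96GB of 8GB DIMMs (everyone else's ceiling): ${96 * price_8gb_dimms:,}")  # $9,600
```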