The True Cost Of Hyperconvergence

Vendors tout hyperconverged systems like EVO:RAIL as less expensive than more conventional storage systems. After pricing my own EVO:RAIL-like system, I found otherwise.

Howard Marks

December 1, 2014


After years of hearing vendors and their fanbois tell me that hyperconverged solutions would revolutionize data center economics, I was a bit surprised when I started hearing rumors that vendors were slapping price tags of more than $200,000 on their EVO:RAIL systems. An EVO:RAIL is four servers and the storage to support the VMs running on those servers, but a $200,000 price tag still seemed a bit steep to me.

So I set out to figure out if -- as the advocates promise -- hyperconverged systems are actually less expensive than their more conventional equivalents. What I found was quite the opposite.

One of the assumptions behind hyperconvergence is efficiency. Building our entire computing environment from the same basic building blocks lets us manage those blocks through the single pane of glass that was, until now, unobtainable. And since those building blocks are themselves made of industry-standard, if not commodity, components, the initial cost should be lower than for the purpose-built hardware that makes up today's storage estate.

I've argued that one of the attractions of software-defined storage, of which hyperconverged systems can be considered a special case, is that a drive slot in a modern storage system actually costs more than the disk drive that goes into it. Since my servers -- unless I was so foolish as to be using blade servers -- have disk drive slots I'm not using, I can fill those slots for just the cost of the disk drive and, of course, the software.
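To make that concrete, here's a back-of-the-envelope sketch in Python. Every number in it is hypothetical -- round figures made up purely for illustration -- but the shape of the comparison is the point:

```python
# Back-of-the-envelope drive-slot economics. All prices here are
# hypothetical round numbers for illustration, not quotes.
array_price = 60_000   # dual-controller array sold diskless, 24 slots (hypothetical)
array_slots = 24
drive_cost = 500       # street price of the drive that fills a slot (hypothetical)
sds_license = 200      # per-drive software-defined storage license (hypothetical)

slot_cost = array_price / array_slots
print(f"Array slot, before the drive: ${slot_cost:,.0f}")               # $2,500
print(f"Empty server slot, filled:    ${drive_cost + sds_license:,}")   # $700
```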

When I set out to confirm the $200,000 price rumors for EVO:RAIL, I couldn't find detailed pricing for the Dell or EMC models. I did find an online seller offering the Supermicro SYS-2027TR-VRL002 at $160,000. That's a good 20% less than $200,000, but I still wanted to know if it was a good deal.

After a little time on the Internet and my usual Supermicro supplier sites, I could add up what it would take for me to build an EVO:RAIL-style system. I should note that my system will lack the EVO:RAIL front end. I've played with the EVO:RAIL user interface, and it sure does make installing the system easier. But once the system is up and running, I think administrators will use the vCenter client, which offers finer control than EVO:RAIL's simplified UI.

The Hardware (street price)

Part                                   Price   Qty      Ext
Supermicro Twin2Pro barebones system  $4,035     1   $4,035
Xeon E5-2620 V2 CPU                     $385     8   $3,080
16 GB ECC RDIMM                         $150    48   $7,200
1.2 TB 10K RPM HDD                      $500    12   $6,000
Intel DC S3700 400 GB SSD               $700     4   $2,800
Intel X520 10Gbps NIC                   $450     4   $1,800
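To keep the arithmetic honest, here's the same parts list as a short Python sketch; the prices and quantities are taken straight from the table above:

```python
# Ersatz EVO:RAIL hardware bill of materials (street prices from the table).
hardware = [
    ("Supermicro Twin2Pro barebones system", 4035, 1),
    ("Xeon E5-2620 V2 CPU",                   385, 8),
    ("16 GB ECC RDIMM",                       150, 48),
    ("1.2 TB 10K RPM HDD",                    500, 12),
    ("Intel DC S3700 400 GB SSD",             700, 4),
    ("Intel X520 10Gbps NIC",                 450, 4),
]
hardware_total = sum(price * qty for _, price, qty in hardware)
print(f"Hardware total: ${hardware_total:,}")   # $24,915
```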

I also went to Dell's site and configured an R720 2U rack-mount server with processors, memory, and storage equivalent to an EVO:RAIL node. It came in at $10,500, or $42,000 for a set of four.

The Software

EVO:RAIL includes a significant software bundle, including vSphere Enterprise Plus edition, the vCenter Server appliance, VSAN, Log Insight, and, of course, the EVO:RAIL setup and management tools.

Part                            Price   Qty       Ext
vSphere Enterprise Plus        $3,495     8   $27,960
VSAN                           $2,495     8   $19,960
Enterprise Plus support 1 yr     $874     8    $6,992
VSAN support 1 yr                $624     8    $4,992
vCenter Server                 $4,995     1    $4,995
vCenter support 1 yr           $1,249     1    $1,249
Log Insight                      $250   100   $25,000

I had a bit of difficulty assigning a value to Log Insight. All but the smallest data centers should have a log analysis solution, and Log Insight is a pretty good one. However, if you're already using Splunk or Sumo Logic, it may not be worth $250 per log source device.

Assuming you want to use Log Insight for 100 devices, the total cost of an ersatz EVO:RAIL system is just under $117,000, or $43,000 less than a real EVO:RAIL. Even my R720-based VSAN solution comes in at about $133,000.
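Here's that bottom line as a quick sketch, using only the figures from the two tables, the $42,000 Dell quote, and the $160,000 street price:

```python
# Software bundle (prices and quantities from the table above).
software = [
    ("vSphere Enterprise Plus",       3495, 8),
    ("VSAN",                          2495, 8),
    ("Enterprise Plus support 1 yr",   874, 8),
    ("VSAN support 1 yr",              624, 8),
    ("vCenter Server",                4995, 1),
    ("vCenter support 1 yr",          1249, 1),
    ("Log Insight",                    250, 100),
]
software_total = sum(price * qty for _, price, qty in software)   # $91,148

supermicro_hardware = 24_915   # from the hardware table
dell_r720_hardware  = 42_000   # 4 x $10,500

print(f"Ersatz EVO:RAIL (Supermicro): ${supermicro_hardware + software_total:,}")  # $116,063
print(f"R720-based VSAN build:        ${dell_r720_hardware + software_total:,}")   # $133,148
print(f"Real EVO:RAIL (street):       $160,000")
```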



"But wait," the fellow in the checkered suit cries out. "You're comparing our shiny new hyperconverged hotness one against the other. Surely things will be different when you compare us to dedicated storage." Never one to shrink from a challenge, I removed the disk drives from the R720 configurator on the Dell site and got a price of $5,780 per server.

I then had to figure out how much usable storage capacity an EVO:RAIL really delivers. With 14.4 TB of raw disk, I could store 7.2 TB of data at the default data protection level of two-way mirroring, which VMware calls failures to tolerate=1. I'm on record saying I don't think that protects data as well as RAID-6 on a dual-controller storage system, but with just four nodes, EVO:RAIL can't support the three-way mirroring I prefer; VSAN needs five servers to support three-way mirrors. In a larger cluster of two or more EVO:RAILs, three-way mirroring would give me 4.8 TB of usable space per appliance.

Now, 7.2 TB leaves no spares, and best practice for a shared-nothing cluster is to keep at least enough free space to absorb a node failure, so the system can rebuild without waiting for hardware to be replaced. Reserving one node's worth of capacity in a four-node system leaves three-quarters of that 7.2 TB, so a small environment with just one EVO:RAIL would really have 5.4 TB of usable space. Larger environments with more nodes would lose proportionally less to that reserve.
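The capacity math, as a short sketch:

```python
raw_tb = 14.4            # 12 x 1.2 TB HDDs across the four nodes
ftt1 = raw_tb / 2        # two-way mirror (failures to tolerate = 1) -> 7.2 TB
ftt2 = raw_tb / 3        # three-way mirror, needs a 5+ host cluster -> 4.8 TB

nodes = 4
with_spare = ftt1 * (nodes - 1) / nodes   # keep one node's worth free -> 5.4 TB
print(f"FTT=1: {ftt1:.1f} TB, with rebuild headroom: {with_spare:.1f} TB, FTT=2: {ftt2:.1f} TB")
```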

Just because I had the configuration and price handy, I added a Tegile HA2100 storage system (600 GB flash, 14 TB of disk in RAID-6). That gave me:

Part             Price   Qty       Ext
R720            $6,000     4   $24,000
vSphere         $8,738     4   $34,952
vCenter         $6,244     1    $6,244
Server total                   $65,196
Tegile HA2100  $60,000     1   $60,000

The EVO:RAIL system does have more flash and might perform a bit better than the Tegile, so let's substitute the HP 3PAR StoreServ 7200 all-flash array; it has only 3.5 TB of raw flash, but with data reduction, it should easily match the 5 TB or so of usable capacity the EVO:RAIL system delivers. That gave me:

Part             Price   Qty       Ext
R720            $6,000     4   $24,000
vSphere         $8,738     4   $34,952
vCenter         $6,244     1    $6,244
Server total                   $65,196
HP 3PAR 7200   $35,000     1   $35,000
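Summing each option against the $160,000 EVO:RAIL street price:

```python
server_total = 65_196   # 4 x R720 + vSphere + vCenter, from the tables above

options = {
    "Tegile HA2100 hybrid":   server_total + 60_000,   # $125,196
    "HP 3PAR 7200 all-flash": server_total + 35_000,   # $100,196
}
for name, total in options.items():
    print(f"{name}: ${total:,} (vs. $160,000 for EVO:RAIL)")
```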

Oh, that can't be right. What if we use a DataGravity NAS? It will host user home directories and VDI personas, so we don't need to create and manage Windows file servers for them; it will also back itself up and provide all sorts of analytics on the files. The entry model has 2.4 TB of SSD and 48 TB of disk for $75,000. Exactly how much data it will hold depends on how many copies you keep, but I'd be surprised if it were less than 12 TB of primary data. If the DataGravity box can support 12 vSphere hosts, which it should, we're still talking about saving $64,000 over what three EVO:RAIL systems would cost.

Naturally, all these systems will also need 10 Gbit/s Ethernet switches, cables, and the like. That could add anywhere from $5,000 for a single workgroup-class switch to $25,000 for a pair of enterprise-class, 24-port switches and twinax DAC cables.

Of course, the external storage in these solutions can also be used to support other workloads, from physical servers to the Hyper-V cluster they're running in the development shop.

I understand the value of a simple installation, but I don't understand why I would pay a premium of $20,000 or more for packaging. Wouldn't it be smarter to have a good VAR or consultant -- and, as a consultant for 30 years, I know there are some -- spend a day or two installing and configuring a system for $5,000, and then spend some of the money you saved on training for your admins?

I would do this job for you myself for $10,000 just to demonstrate that, as nice as an easy installation might be, it's not worth $20,000.

Disclosure: Dell, HP, and Tegile have been clients of DeepStorage LLC.

About the Author

Howard Marks

Network Computing Blogger

Howard Marks is founder and chief scientist at DeepStorage LLC, a storage consultancy and independent test lab based in Santa Fe, N.M., concentrating on storage and data center networking. In more than 25 years of consulting, Marks has designed and implemented storage systems, networks, management systems, and Internet strategies at organizations including American Express, J.P. Morgan, Borden Foods, U.S. Tobacco, BBDO Worldwide, Foxwoods Resort Casino, and the State University of New York at Purchase. The testing at DeepStorage Labs is informed by that real-world experience.

He has been a frequent contributor to Network Computing and InformationWeek since 1999 and a speaker at industry conferences including Comnet, PC Expo, Interop, and Microsoft's TechEd since 1990. He is the author of Networking Windows and co-author of Windows NT Unleashed (Sams).

He is co-host, with Ray Lucchesi, of the monthly Greybeards on Storage podcast, where the voices of experience discuss the latest issues in the storage world with industry leaders. You can find the podcast at http://www.deepstorage.net/NEW/GBoS
