Jasmine McTigue

DIY Storage: Why I Built My Own SAN

I built my own SAN to meet some storage needs at my job. In a series of posts I’ll cover the reasons why, how I did it, and the hardware and software I used.

This is the first column in a series about building your own IT storage. I’ll discuss why and how I’ve done it, look at the technological and business drivers behind my decision, and share how I weighed the risks and benefits of the DIY route.

My company is an ISO-certified medical device manufacturer. We run three locations in two countries, including a large manufacturing plant in the Dominican Republic. We have an IT staff of five full-time employees across both countries and support a diverse application portfolio that includes the usual suspects: email, Web collaboration and multisite distributed file sharing.

We have some specific applications that are I/O- and resource-intensive, including Epicor's E9 enterprise ERP software. We also run specialty software for ISO compliance, quality management and product labeling.

Like other IT professionals, I've got a variety of storage needs. It seems like no matter how well I scale up, there's always a shortage of something. Sometimes it's space--15TB of backup, anyone? Other times it's I/O or latency. There are plenty of products on the market that can meet these needs, but they're often outside the reach of my budget.

I mostly run a Dell shop. Dell quotes me consistently lower prices on the same commodity hardware than sources such as CDW or PCMall, and some real deals are possible at the end of the quarter (with a little negotiation). Plus, I get enterprise replacement and support contracts.

That said, Dell going private with the support of Silver Lake Partners makes me wonder if those end-of-quarter deals are going to evaporate. That, in turn, makes me think I need other options.

Recently, I started shopping to fill out the remainder of a Dell SAN, an MD3220 SAS unit. Dell wanted about $430 for a 1TB Seagate Constellation disk. I dug into the Dell docs to find the model number, and, sure enough, that same disk is available online for $250 (without any volume discounts or deals). Through MA Labs I can get it for $225. That's $205 in savings, bringing the price to barely half of Dell's. Even aggressive negotiation with Dell wouldn't get me those kinds of savings.

Then I looked into purchasing SLC SSDs for the same array. Talk about sticker shock: Dell wanted $3,000 for one, even though I can get the same drive online for $1,500. I also discovered I can get a high-grade MLC SSD with even better specs for $680 through a different distributor.


But there's a problem, and it's ubiquitous in the SAN industry: You can't add commodity drives, even the same model the vendor sells, because the drives won't have the vendor's firmware. Downloading the manufacturer's firmware update utilities and trying to flash a drive with supported vendor firmware doesn't appeal to me. I'm confident I could reverse-engineer the firmware update to the point where it would work, but I'd be out of support, and possibly mistrustful of the modified drive's integrity. That's no way to host enterprise apps.

I don't like getting screwed by unjustifiable margins, so I decided it was time to do things differently.

Enter DIY

Around this same time, a former client of mine returned a "budget" server I had built for a four-person office. The server has a $125 Zotac motherboard with an integrated Atom 330, a 1.6GHz dual-core chip, and 4GB (the max) of memory. The Zotac ITX board has four SATA channels, two of which were running RAIDed 500GB drives for file storage and the OS, and one running a SATA CD-ROM. The net cost of this "server," minus Windows licensing, was about $350 with drives. If I had bought just the board, chassis and memory, it would have been a measly $225.

This happened at a serendipitous time at the office. I was unsatisfied with how some enterprise apps were being backed up, and I wanted more storage space, but my budget really demanded some smart buying.

So, I bought four 3TB 3.5-inch commodity SATA disks ($500) and a dual-port Supermicro PCIe network card ($80) to complement the Nvidia nForce network card that came with the Zotac. The board also had a mini PCIe slot intended for use with a Wi-Fi adapter. SIL Technologies makes a two-port SATA board for mini PCIe, so I bought that, too ($49).

Now I have six channels of SATA2 connectivity, 12TB of raw space and two server-grade iSCSI NICs whose traffic I can spread across the Atom's whopping dual 1.6GHz cores via receive-side scaling. Add two more drives for a total of 18TB of raw space, plus a bigger PSU to supply plenty of power.

Here's the total cost, including the $225 in parts I reused: board, memory, chassis ($225); 18TB of 3TB SATA ($750); network card ($80); SATA PCIe mini card ($50); and power supply ($50). My grand total was $1,155 in hardware.
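As a sanity check, the tally above works out like this (a trivial sketch; the line items are straight from the build list):

```python
# Parts list for the DIY array, in dollars.
# The reused board/memory/chassis are counted at their original $225 cost.
parts = {
    "board, memory, chassis (reused)": 225,
    "6 x 3TB SATA drives": 750,
    "dual-port network card": 80,
    "mini PCIe SATA card": 50,
    "power supply": 50,
}

total = sum(parts.values())
print(f"Grand total: ${total:,}")  # → Grand total: $1,155
```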

This is hardly a mission-critical storage array. But it's entirely suitable for auxiliary backup capacity and non-critical VMware ISO storage. If the thing dies, I'm not going to be upset. In the meantime, having the extra storage available via iSCSI is certainly a nice option.

The question then became: Which OS can I put on this unit that will provide solid software RAID (two of the drives hang off a different card, and I'd prefer RAID 50) and reliable performance for dual active iSCSI multipath network connections?
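For context, RAID 50 stripes data (RAID 0) across two or more RAID 5 parity groups. A toy sketch of the block layout, purely illustrative and not any vendor's actual on-disk format:

```python
def raid50_layout(num_groups=2, disks_per_group=3, stripes=4):
    """Map logical data blocks onto a RAID 50 layout: RAID 0 striping
    across RAID 5 groups, with parity rotating inside each group."""
    layout, block = [], 0
    for stripe in range(stripes):
        row = []
        for g in range(num_groups):
            parity_disk = stripe % disks_per_group  # parity rotates per stripe
            for d in range(disks_per_group):
                if d == parity_disk:
                    row.append(f"P(g{g})")          # parity block for group g
                else:
                    row.append(f"D{block}")         # next logical data block
                    block += 1
        layout.append(row)
    return layout

for row in raid50_layout():
    print(row)
```

With six disks in two groups of three (the configuration described above), each stripe holds four data blocks and two parity blocks, so usable capacity is two-thirds of raw, and each RAID 5 group can survive one drive failure.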

There are a lot of options, and some serious pros and cons involved. In my next column I'll look at the SAN software landscape and explain why I decided that Windows Storage Server 2012 was my OS of choice ... for the moment.

Jasmine J. McTigue
User Rank: Apprentice
4/3/2013 | 5:54:56 PM
re: DIY Storage: Why I Built My Own SAN
Dell is a slave to quarter-end numbers right now. Because it will no longer have to meet quarterly expectations as a private firm, I have to think it may be less willing to heavily discount when it's already among the lowest bidders. Further, its acquisition of best-of-breed technologies across all tech verticals (SonicWall, Force10 and Quest, to name a few) makes me think it is essentially rebranding itself as the new IBM: Pay more for Dell, we're worth it.


Thanks for the compliment, and that's exactly what I'm thinking about older hardware. Why can't it serve a purpose in the third tier?

As far as software RAID, you raise an interesting point. If you are running a dedicated storage operating system, is it software RAID anymore? I mean, when you buy a NetApp or EMC SAN, the controller is running some kind of software that's performing RAID parity calculations. What's the difference between a dedicated storage operating system and hardware RAID? Is it RAID-specific ASICs? Does it show up as latency and performance? Considering that budget SANs ship with, say, dual 800MHz RISC chips and some paltry amount of cache, what's the difference between that and my dual-core Atom with 2GB of memory allocated as cache?
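The parity calculation in question is just XOR across the blocks of a stripe, whether an ASIC or a general-purpose CPU does it. A minimal sketch of the RAID 5 math (illustrative only):

```python
from functools import reduce

def parity(blocks):
    """XOR parity across equal-length data blocks (the RAID 5 calculation)."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Recover a lost block: XOR the parity with the surviving data."""
    return parity(surviving_blocks + [parity_block])

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data disks
p = parity(data)                      # stored on the parity disk
assert rebuild([data[0], data[2]], p) == data[1]  # "disk 1" failed; recovered
```

Since XOR is commutative and self-inverse, reconstruction is the same operation as parity generation, which is why the workload is a reasonable fit for commodity CPUs.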

What I'd really like to see, and just don't have the resources to work with yet, is some comprehensive performance testing of, say, hardware RAID 50 versus a Nexenta-driven ZFS storage array on the SAME BOX. Is software RAID from a dedicated storage operating system enough? SANsymphony certainly thinks so, and that runs on Windows!
User Rank: Apprentice
4/1/2013 | 1:34:28 PM
re: DIY Storage: Why I Built My Own SAN
Here is that column, "Dell's Future: 3 Wild Cards CIOs Should Understand"

User Rank: Apprentice
3/30/2013 | 2:23:36 PM
re: DIY Storage: Why I Built My Own SAN
Great Column, Jasmine.
We also run a Dell shop, and thanks to virtualization we are leveraging our older servers for backup storage and Hyper-V and ESXi testbeds.
I agree it's not worth breaking your maintenance contracts with non-branded drives, but on these legacy servers we can add Newegg RAM, cards and drives for short money. Recycle, repurpose, reuse!
I think LeftHand used to offer a free virtual SAN appliance for converting servers into iSCSI storage. (I think HP bought them out, and it might cost $$$ now.)
Regarding your comment about RAID 50, I agree with you (and Scott) that it's the best RAID config. However, my distaste for software RAID compels me to suggest that you use hardware RAID and stripe for performance and space utilization. As you stated, it's auxiliary backup; if a drive fails and a RAID set breaks, replace the drive and the next night's backup takes care of it.
I look forward to your next post.


User Rank: Apprentice
3/29/2013 | 7:19:29 PM
re: DIY Storage: Why I Built My Own SAN
It's interesting that you think Dell going private might make it LESS willing to discount. Maybe not having to be on the Wall St. profit margin treadmill will give the company more freedom to match prices offered by NewEgg, etc.

We recently ran a column about this on IW; the author made an excellent point about using the occasion to revisit contracts and agreements. I will look for the link. Lorna Garey, IW Reports