In Part 4 of my DIY storage column, I want to talk about RAID controllers, redundancy, and making your storage accessible to other servers. First, let’s talk about the specifics of JBOD enclosures.
JBOD enclosures built for SAS come in two flavors--single expander or dual expander--and the difference matters when we get into redundancy. A single-expander JBOD from a reputable brand will have one host connectivity port and two expansion ports for cascading. With only one expander, there is only one path to the drives, which means we can’t attach multiple hosts or create redundancy at the controller level.
A dual-expander enclosure guarantees that the drives in the enclosure are accessible via both SAS paths. But this can be a problem with SATA III drives: they are single port and cannot share I/O across controllers, even though they can still fail over to the other expander in the event of a controller failure.
One way to solve this problem is to make SATA III drives dual port with an interposer. A SATA interposer is a small piece of circuitry that sits between the drive and the backplane and lets a single-port SATA disk process dual-port commands in real time. LSI makes a good chip, but there are others, and the cost per drive is relatively low.
So, now you want to take your DIY SAS/SATA box and turn it into a real SAN--meaning dual controllers and redundant everything. Provided that your JBOD has a dual-expander backplane, this is pretty easy: On the first storage controller server, connect your SAS RAID controller to host port one on the JBOD. On the second storage controller server, connect your RAID controller to host port two.
Even if you cascade four or more JBODs off the primary one, all of those disks remain accessible to the controllers in both boxes. If you’re only looking for failover, you can use non-interposer SATA disks in those JBODs, because only one path will be active at a time. But if you want an active/active controller configuration, you’ll need 100% SAS drives or an interposer on every SATA disk.
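If a single controller node is cabled to both expanders, this same failover-vs-active/active choice shows up in the host’s multipath policy. As a minimal sketch, assuming a Linux node running dm-multipath (the WWID and alias below are placeholders, and this is only a fragment of a full config):

```
# /etc/multipath.conf -- illustrative fragment, not a complete config
multipaths {
    multipath {
        wwid  3600a0b800012345600000000deadbeef   # placeholder WWID
        alias jbod_disk01
        # "failover": one active path at a time (fine for plain SATA disks)
        # "multibus": use all paths at once (needs SAS or interposer-equipped drives)
        path_grouping_policy failover
    }
}
```

With failover grouping, the second path sits idle until the first one dies, which is why plain single-port SATA disks are acceptable in that mode.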
Note that if your controller has dual SAS outputs, you can use the second output on each controller to drive a whole second series of cascaded JBODs. If not, you’ll be adding RAID controller cards to achieve the same goal. Either way, with this type of system you could easily add hundreds and hundreds of drives' worth of shared, high-availability (HA) storage to a single storage server HA cluster.
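To put "hundreds and hundreds of drives" in perspective, here’s a quick back-of-the-envelope calculation. The chain depth, bay count, and drive size below are illustrative assumptions, not vendor limits:

```python
# Back-of-the-envelope scale of a cascaded-JBOD SAS topology.
# All figures are illustrative assumptions, not vendor specs.

def raw_capacity_tb(jbod_chains, jbods_per_chain, bays_per_jbod, drive_tb):
    """Total drive count and raw (pre-RAID) capacity across all cascaded JBODs."""
    drives = jbod_chains * jbods_per_chain * bays_per_jbod
    return drives, drives * drive_tb

# Two SAS outputs per controller, five JBODs cascaded per output,
# 24-bay enclosures, 4 TB drives:
drives, tb = raw_capacity_tb(jbod_chains=2, jbods_per_chain=5,
                             bays_per_jbod=24, drive_tb=4)
print(drives, tb)  # 240 drives, 960 TB raw
```

Even a modest two-chain layout crosses the 200-drive mark before you add any SAS switching.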
And if that’s not enough, add some SAS switching (a 16-port switch costs about $3,000 from LSI), which will let you zone and attach even more JBODs to your new DIY array.
[Planning to build your own data center lab? Find out details for building a solid foundation in "Building a Lab: Patching and Cables."]
Now, what about the software that drives an HA DIY system with two or more nodes?
If you want to handle your RAID in the hardware and take advantage of cool caching products from LSI, you can run your HA at the controller layer with LSI’s HA firmware. This allows up to two boxes per JBOD without switching, and an unlimited number of storage controller hosts per JBOD with switching. In this case, the HA happens via the RAID controller, so you provision your arrays at the controller level--including features like caching and FastPath--and then install the corresponding operating system drivers so the hosts can share the storage I/O workload.
But there are many other options. With products like Windows Storage Server 2012 or ZFS-based technology from Nexenta, the HA takes place at the software layer, and RAID configuration isn't done on the card at all. Instead, the drives are presented to the storage controller operating system without hardware RAID, and RAID sets are created by the storage operating system running on each node. While this may mean giving up neat features like CacheCade and FastPath, storage control software products like Nexenta's have their own comparable features--some with even more capabilities.
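To make the software-RAID approach concrete, here is roughly what provisioning looks like on a ZFS-based node: the node is handed the raw JBOD disks and builds its own redundancy. This is a sketch only--the /dev/sd* device names are placeholders, and the right pool layout depends on your drive count:

```shell
# Illustrative only: device names are placeholders for JBOD disks.
# Striped mirrors (roughly RAID 10) across four drives:
zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

# Or double parity (roughly RAID 6) across six drives:
#   zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

zpool status tank   # confirm the vdev layout
```

Because the RAID sets live in the storage OS rather than on the card, failing over means importing the pool on the surviving node (`zpool import`)--which is exactly the job the HA software automates.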
What if we want to throw virtualization into the mix? In my next column, I'll discuss virtualizing SAS HA systems. In the meantime, if you have comments or questions about building your own array, please use the comments section below.