Jasmine McTigue

Network Computing Blogger


DIY Storage: Using Virtualization To Cut Costs

In my last column on DIY storage, I talked about engineering redundancy for inexpensive systems built on SAS direct-attached storage. These arrays are great because of the huge savings involved. But what if you just need a few terabytes of auxiliary storage for backups or archives and don't want to purchase a dedicated piece of server hardware to run a storage management OS?

One innovative way to tackle auxiliary storage demands on a shoestring budget is to buy a direct-attached storage chassis and an interface card, and retrofit them into an existing hypervisor host in your data center. You can pack that chassis full of near-line SAS or commodity SATA disks to economize further, then spin up a storage management VM, connect it to the DAS box, and provision access to that storage over inexpensive gigabit iSCSI to the other hosts in the cluster.
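
If the storage management VM exports its capacity over iSCSI, attaching the other cluster hosts is mostly initiator-side plumbing. Here's a minimal sketch of the discovery-and-login step, assuming a Linux host with the open-iscsi initiator (iscsiadm) installed; the portal address is a hypothetical placeholder for your own storage VM.

```python
"""Attach a Linux cluster host to the storage VM's iSCSI target.

Assumes the open-iscsi initiator (iscsiadm) is installed; the portal
address below is a hypothetical placeholder for the storage VM's IP.
"""
import subprocess

PORTAL = "192.168.10.50"  # hypothetical iSCSI portal on the storage VM

def discover_targets(portal: str) -> str:
    """SendTargets discovery: list the IQNs the portal exports."""
    result = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def login(target_iqn: str, portal: str) -> None:
    """Log in to one discovered target; its LUNs then show up as local disks."""
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", target_iqn, "-p", portal, "--login"],
        check=True,
    )

if __name__ == "__main__":
    print(discover_targets(PORTAL))
```

On VMware or Hyper-V hosts you'd point the built-in software iSCSI initiator at the same portal instead.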


Voila: in about two hours you could have an extra 10, 20 or even 200 terabytes of inexpensive archival, backup or scratch storage available to the entire virtualization cluster. And if 1GbE isn't enough to satisfy the I/O requirements, you can add more NICs for multipath, put a 10GbE adapter or two into the hypervisor host, or even add a couple of FC adapters to connect that storage directly into an existing Fibre Channel fabric.
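
Some rough line-rate arithmetic helps decide when extra 1GbE paths suffice and when it's worth stepping up to 10GbE or FC. The figures below are nominal rates only and ignore protocol overhead, so treat them as upper bounds rather than benchmarks.

```python
"""Back-of-the-envelope interconnect bandwidth, nominal line rates only.

Real iSCSI/FC throughput runs lower once protocol overhead, TCP
behavior and the multipath policy are factored in; aggregates also
assume I/O is actually spread across all paths.
"""
GBIT_MBS = 1000 / 8  # 1 Gbit/s is 125 MB/s at line rate

options = {
    "1 x 1GbE":             1 * GBIT_MBS,
    "4 x 1GbE (multipath)": 4 * GBIT_MBS,
    "2 x 10GbE":            2 * 10 * GBIT_MBS,
    "2 x 8Gb FC":           2 * 8 * GBIT_MBS * 0.8,  # 8b/10b encoding overhead
}

for name, mb_s in options.items():
    print(f"{name:>22}: ~{mb_s:,.0f} MB/s aggregate")
```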

However, the problem with this approach is that it brings our inexpensive storage expansion plan back into the realm of single-point-of-failure storage, rendering it immediately unsuitable for tier-one applications. So let's expand our thinking a little. Suppose that instead of connecting the new DAS chassis to only a single hypervisor in the data center, we connect it to two. The only prerequisite is that you have a couple of hypervisor hosts, each with a spare PCIe slot or two to accommodate a dedicated SAS card and perhaps an extra network card or FC adapter for interconnect to the rest of the data center.

Now, instead of running only one storage management VM, we'll run two: one on each host, either of which can provide access to the DAS array. With hardware in hand, all we need is a compatible storage management OS, such as Nexenta or Windows Storage Spaces on Server 2012, that can operate in a high-availability (HA) cluster.
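
Conceptually, the rest of the cluster just follows whichever storage VM currently answers on the target portal. The toy sketch below illustrates only that active/passive idea; real HA stacks, such as Nexenta's HA plugin or Windows failover clustering, layer cluster membership, fencing and SCSI reservations on top, so don't mistake a bare health check for production failover logic.

```python
"""Toy illustration of the active/passive pattern behind an HA storage
pair: clients follow whichever node currently serves the iSCSI portal.

Node addresses are hypothetical; real HA products handle failover
themselves with fencing and reservations, not a simple TCP probe.
"""
import socket
from typing import Optional

NODES = ["192.168.10.50", "192.168.10.51"]  # the two storage VMs
ISCSI_PORT = 3260  # standard iSCSI target port

def alive(host: str) -> bool:
    """Crude liveness signal: can we open a TCP connection to the portal?"""
    try:
        with socket.create_connection((host, ISCSI_PORT), timeout=2.0):
            return True
    except OSError:
        return False

def active_node() -> Optional[str]:
    """Return the first node answering on the portal, or None."""
    return next((node for node in NODES if alive(node)), None)

if __name__ == "__main__":
    print(f"active storage node: {active_node() or 'none reachable'}")
```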

Nexenta is by far the more feature-rich, higher-performing and robust option of the two, although to use HA with Nexenta we need the commercial HA plugin for the product, which adds some cost but enables significant performance features such as shared ZIL SSD caching. Windows Storage Spaces, by contrast, is less robust and slower, but still perfectly production-ready, with the added benefit that we don't need anything beyond our existing Windows licensing to provision a clustered storage space.
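
As a sense of how little tooling the Storage Spaces route requires, a pool and a mirrored virtual disk can be carved out of the DAS bays with a handful of the Server 2012 storage cmdlets. The sketch below drives them from Python purely for illustration; the pool and disk names are hypothetical, it assumes a single Storage Spaces subsystem, and in practice you'd run the embedded commands in an elevated PowerShell session on the storage VM.

```python
"""Sketch: pool the DAS disks and carve out a mirrored virtual disk
using the Windows Server 2012 storage cmdlets.

Pool/disk names are hypothetical; run on the Windows storage VM with
administrative rights. Assumes one Storage Spaces subsystem is present.
"""
import subprocess

PS_SCRIPT = r"""
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DasPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "DasPool" -FriendlyName "Backup01" `
    -ResiliencySettingName Mirror -UseMaximumSize
"""

subprocess.run(["powershell", "-NoProfile", "-Command", PS_SCRIPT], check=True)
```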

[Scale-out storage isn't just for big companies anymore. Find out how vendors are making the technology available to smaller businesses by reducing its cost and complexity in "Scale-Out Storage Scales Down For SMBs"]

There are a few caveats to this approach as well. First, as I've mentioned in an earlier column, SAS is a dual-channel interconnect protocol. This means the DAS box has to have dual expanders so that there are two discrete paths to the storage, one from each machine. Second, because both of those channels must go directly to each disk, single-channel SATA disks are out of the running, and we're into commodity SAS to fill our new chassis. Third, there are some hypervisor- and storage-software-specific features we need to be aware of when configuring high availability across hosts in our storage management VMs.

On VMware, we'll want to map both the SAS interface card and whatever storage network interface card we're using directly to the storage management VM with VMDirectPath to ensure optimum performance. On Hyper-V, we're a little more limited, because properly virtualizing a ZFS-driven product such as Nexenta is problematic: Hyper-V has no equivalent of VMDirectPath, so we can't pass storage interfaces directly through to the Nexenta VM, which can cause problems and dramatically limit performance.

If we need a truly enterprise-class storage feature set like Nexenta's but have to run on Hyper-V hosts, we could instead choose a product like DataCore's SANsymphony-V, which uses replication and active/active load balancing between high-availability node pairs, each with its own local storage. This makes SANsymphony-V much less sensitive to guest operating system storage configuration than Nexenta, while providing a similarly enterprise-grade feature set, including asynchronous replication, PCIe SSD caching and Metro Mirroring.

The ultimate cost of acquiring storage this way varies: I/O requirements dictate base hardware costs, and the feature set does the same for storage management software licensing. But whether you're engineering for massive-scale data or just bringing some additional capacity online, leveraging commodity hardware and competitively priced or free storage software can yield savings of more than 50% versus traditional enterprise storage vendors.
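
To make that 50% figure concrete, here's the shape of the math with loudly hypothetical list prices; substitute real quotes for your own build before drawing conclusions.

```python
"""Illustrative DIY-vs-array cost comparison. Every price here is
made up for illustration; plug in real quotes.
"""
diy = {
    "dual-expander DAS chassis": 2500,
    "12 x 4TB nearline SAS":     12 * 350,
    "2 x SAS HBA":               2 * 300,
    "2 x 10GbE NIC":             2 * 450,
    "storage software + HA":     4000,
}
diy_total = sum(diy.values())

enterprise_array = 30000  # hypothetical entry-level array quote

savings = 1 - diy_total / enterprise_array
print(f"DIY build:        ${diy_total:,}")
print(f"Enterprise array: ${enterprise_array:,}")
print(f"Savings:          {savings:.0%}")
```

In my final column in this series, I'll look at the risks, costs and benefits of DIY storage configurations.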

Find out about the impact of VDI on storage infrastructures and innovative technologies that use RAM, flash and clever software to tame the VDI storage beast in Howard Marks' session, Storage Solutions For VDI at Interop New York this October.

Jasmine McTigue is the IT manager for Carwild Corp. She is responsible for IT infrastructure and has worked on numerous customer projects as well as ongoing network management and support throughout her 10-plus-year career.

