DataCore's Storage Hypervisor: An Overview

DataCore's SANsymphony-V software delivers a host of storage features, including auto-tiering, replication, and the ability to remove physical disks without disruption, and it runs on a variety of storage hardware.

A storage hypervisor is an emerging term used by some vendors to describe their approach to storage virtualization. Several companies offer storage hypervisors, including IBM, Virsto and DataCore. I've already written about IBM and Virsto in previous blogs.

Now it's DataCore's turn. DataCore is an independent software vendor (ISV), so it has no financial interest in selling the underlying storage hardware. It supports virtualized servers, traditional physical hosts, and legacy storage with the same feature stack and automation. DataCore's storage hypervisor is a software product called SANsymphony-V. This blog will examine some enhanced and new features of the Version 9 release.

Auto-tiering

Auto-tiering is a "hot" topic (pun intended!), spanning not only Tier 0 solid-state devices, but also performance (SAS or FC) hard disk drives, capacity (SATA) drives, and archive storage that can even be rented from distant public cloud providers. The feature includes automatic tuning that creates heat maps to reveal heavy disk activity, so that the hottest data gets the most attention (in order to meet performance service-level requirements). It also automates load balancing across the available disk resources.
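To make the heat-map idea concrete, here is a minimal sketch of access-frequency-driven tier placement. This is purely illustrative, not DataCore's actual algorithm; the class, method names, and the "top-N blocks fit on SSD" policy are all assumptions for the example.

```python
# Toy heat-map tiering sketch (illustrative only, not DataCore internals):
# count I/Os per block, then place the hottest blocks on the fast tier.
from collections import Counter

class TieringEngine:
    def __init__(self, fast_capacity):
        self.heat = Counter()              # block id -> recent access count
        self.fast_capacity = fast_capacity # how many blocks fit on SSD

    def record_io(self, block):
        """Update the heat map on every read or write."""
        self.heat[block] += 1

    def plan_placement(self):
        """Hottest blocks go to the fast tier; the rest stay on capacity disks."""
        ranked = [b for b, _ in self.heat.most_common()]
        fast = set(ranked[:self.fast_capacity])
        slow = set(ranked[self.fast_capacity:])
        return fast, slow

engine = TieringEngine(fast_capacity=2)
for block in ["a", "a", "a", "b", "b", "c"]:
    engine.record_io(block)
fast, slow = engine.plan_placement()
print(fast, slow)  # the two hottest blocks (a, b) land on SSD; c stays on HDD
```

A real implementation would also age the counters over time so yesterday's hot data can cool off and be demoted.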

Scale-out architecture

DataCore also supports a scale-out architecture, which lets administrators add DataCore nodes non-disruptively to meet the growing I/O needs of larger IT data centers and cloud storage providers.

N+1 configuration

Redundancy is built into the storage infrastructure via an N+1 configuration that uses spare I/O bandwidth to absorb the loss of a node and its resources without compromising throughput. N+1 configurations let administrators add a single node at a time, rather than the norm of 2N redundant systems. This provides granularity and much better amortization for customers that want to increase performance or redundancy but don't want the added cost of purchasing dual controllers or two nodes at a time.
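A quick back-of-envelope comparison shows why N+1 amortizes better than 2N. The figures below are illustrative arithmetic, not DataCore sizing data.

```python
# Redundancy overhead: 2N pairs every active node with a standby,
# while N+1 keeps one spare node's worth of headroom for the whole pool.
def spares_2n(active_nodes):
    """2N: one standby per active node -> 100% overhead."""
    return active_nodes

def spares_n_plus_1(active_nodes):
    """N+1: a single spare covers the pool regardless of its size."""
    return 1

for n in (4, 8):
    print(f"{n} active nodes: 2N needs {spares_2n(n)} spares, "
          f"N+1 needs {spares_n_plus_1(n)}")
```

The gap widens as the pool grows: at eight active nodes, 2N buys eight standby nodes where N+1 buys one.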

Non-disruptive pool housekeeping

DataCore also offers non-disruptive pool housekeeping, which means a disk drive can be removed from a virtual pool without affecting live applications. This eases processes such as decommissioning a drive due to age or physical problems, or shrinking a physical pool to use the drive elsewhere; the data is automatically distributed among the remaining disks without requiring failover or volume replacement. The ability to make changes without affecting application availability should be very welcome to administrators.
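The housekeeping step can be pictured as evacuating the leaving disk's blocks onto the pool's survivors while the volume stays online. The sketch below is a hypothetical model of that redistribution; the disk names, round-robin placement, and `evacuate` function are assumptions for illustration, not DataCore's internals.

```python
# Sketch of decommissioning a disk from a virtual pool: every block on
# the leaving disk is redistributed round-robin across the remaining disks.
from itertools import cycle

def evacuate(pool, leaving):
    """Return a new pool with `leaving` removed and its blocks rehomed."""
    survivors = {d: blocks[:] for d, blocks in pool.items() if d != leaving}
    targets = cycle(sorted(survivors))        # spread load across survivors
    for block in pool[leaving]:
        survivors[next(targets)].append(block)
    return survivors

pool = {"disk1": [1, 2], "disk2": [3, 4], "disk3": [5, 6]}
new_pool = evacuate(pool, "disk3")
print(new_pool)  # disk3 is gone; its blocks 5 and 6 now live on disk1 and disk2
```

The key property is that no block is ever lost or taken offline during the move, which is what lets applications keep running.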

Localized off-premises storage

DataCore extends the basic principles of pooling, automation, and central management from the local site to remote sites; the storage hypervisor makes these processes transparent, so neither users nor applications can tell the difference.

Replication and recovery

The ability to localize off-premises storage lets customers leverage distant sites for disaster recovery. On top of synchronous mirroring within metropolitan distances, SANsymphony-V can manage one-to-many, many-to-one, and many-to-many asynchronous replication connections over long-haul links. One benefit of its bi-directional replication approach is that users can test disaster recovery procedures without impacting production or the ongoing replication of production data. This capability is very useful, as disaster recovery testing is often neglected for a number of reasons, including the fact that it is hard to bring applications down even temporarily in 24/7 business environments.
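The fan-out and fan-in topologies described above can be modeled as a set of directed site-to-site links. The snippet below is a made-up illustration of that idea; the site names and helper functions are assumptions, and this is not SANsymphony-V's configuration model.

```python
# Asynchronous replication topologies as directed links between sites.
# A site can be both a source and a target (many-to-many, bi-directional).
links = {
    ("nyc", "chicago"),    # one-to-many: nyc fans out to two targets
    ("nyc", "dallas"),
    ("boston", "dallas"),  # many-to-one: dallas receives from two sources
    ("dallas", "nyc"),     # bi-directional pair, useful for DR testing
}

def targets_of(site):
    """All sites this site replicates to."""
    return sorted(dst for src, dst in links if src == site)

def sources_of(site):
    """All sites replicating into this site."""
    return sorted(src for src, dst in links if dst == site)

print(targets_of("nyc"))     # ['chicago', 'dallas']
print(sources_of("dallas"))  # ['boston', 'nyc']
```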

Up Next: Two DataCore Customer Use Cases

David Hill is principal of Mesabi Group LLC, which focuses on helping organizations make complex IT infrastructure decisions simpler and easier to understand. He is the author of the book "Data Protection: Governance, Risk Management, and Compliance."
