Commentary

Howard Marks, Network Computing Blogger

Whiptail Breaks Vendor Lock-in On Storage Replication

Replication between data centers has typically required customers to buy two identical storage arrays from the same vendor. Storage company Whiptail has added heterogeneous replication to its array, letting customers replicate between a Whiptail array and just about any other storage system.

Array-based replication is the foundation of disaster recovery plans for most large organizations. Companies rely on matched pairs of disk arrays in separate physical locations to protect their data. The problem with array-based replication is that it works only between arrays from the same vendor. Now, all-solid-state array vendor Whiptail is breaking the tradition of vendor lock-in by replicating from its Accela array to just about any storage.

Once the storage industry mastered the basics of RAID and could expect an external disk system to survive the failure of a disk drive without crashing or losing data, the next step was to protect customers from array failures.


This is achieved by replicating the data to another array, preferably in another data center. Replication was one of the first "value-added" features to appear on disk arrays and is available, often at extra cost, on all but the simplest arrays. The one catch for customers is that replication is homogeneous: you can only replicate data between arrays from the same vendor family. That's because array vendors have no incentive to support heterogeneous replication. If your shiny new ExaStor 7000 could replicate to anything, you might simply move your old disk array to your DR site instead of buying a second ExaStor for DR.

For a long time I've wondered why no one has simply leveraged iSCSI for heterogeneous replication. After all, an iSCSI array could have an internal iSCSI initiator that mounted a LUN on some other iSCSI array. It could then simply mirror its local LUN to the remote one.
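
To make that idea concrete, here's a minimal Python sketch of the model, with ordinary files standing in for the local LUN and for the remote LUN an iSCSI initiator would have attached. Every write is applied to both copies before it's acknowledged, which is synchronous mirroring at its simplest; all names here are illustrative, not any vendor's API.

    # Minimal sketch of synchronous LUN mirroring, assuming the remote
    # iSCSI LUN is already attached and visible as a local block device.
    # All names are illustrative; this is not any array's actual code.

    class MirroredLUN:
        def __init__(self, local_path: str, remote_path: str):
            # "r+b" lets us seek and overwrite blocks in place.
            self.local = open(local_path, "r+b")
            self.remote = open(remote_path, "r+b")

        def write(self, offset: int, data: bytes) -> None:
            # Synchronous mirroring: the write isn't acknowledged
            # until both copies have it.
            for dev in (self.local, self.remote):
                dev.seek(offset)
                dev.write(data)
                dev.flush()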

Of course, it gets more complicated if you want asynchronous replication, and there's no simple way to implement WAN optimization, compression or any of the myriad features you can add when you control both ends of the link. That said, I still think there's value in support for some level of basic replication to dissimilar hardware.
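
A hedged sketch of just the queuing half of that problem, again with illustrative names: in asynchronous mode the write is acknowledged as soon as the local copy is safe, and a background thread ships it to the remote copy later. Everything still sitting in that queue when the source site fails is lost, which is the asynchronous RPO gap.

    import queue
    import threading

    # Illustrative asynchronous mirror, not any vendor's design.

    class AsyncMirroredLUN:
        def __init__(self, local_path: str, remote_path: str):
            self.local = open(local_path, "r+b")
            self.remote = open(remote_path, "r+b")
            self.pending: queue.Queue = queue.Queue()
            threading.Thread(target=self._drain, daemon=True).start()

        def write(self, offset: int, data: bytes) -> None:
            self.local.seek(offset)
            self.local.write(data)
            self.local.flush()
            # The write is acknowledged here; anything still queued
            # below is lost if the source site fails.
            self.pending.put((offset, data))

        def _drain(self) -> None:
            while True:
                offset, data = self.pending.get()
                self.remote.seek(offset)
                self.remote.write(data)
                self.remote.flush()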


Whiptail's open-target replication manages to replicate data to any arbitrary storage system through the use of an application agent at the receiving end that runs on a Windows or Linux server. The agent implements the same replication protocol that Whiptail's Accela arrays use, so it looks like another Accela to the source array.
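
Whiptail hasn't published its wire protocol, so the following is only a hedged sketch of what such a catcher agent might look like: a small server that accepts (offset, length, payload) records over TCP and applies them to a raw volume. The framing, field sizes and port number are invented for illustration, and the real agent also encrypts the traffic.

    import socket
    import struct

    # Hypothetical "catcher" agent. The 16-byte (offset, length) header
    # framing and the port are invented for illustration; Whiptail's
    # actual replication protocol is proprietary.

    HEADER = struct.Struct("!QQ")  # 64-bit byte offset, 64-bit payload length

    def serve(volume_path: str, port: int = 9999) -> None:
        with open(volume_path, "r+b") as volume, \
                socket.create_server(("", port)) as srv:
            conn, _addr = srv.accept()
            with conn:
                while True:
                    header = conn.recv(HEADER.size, socket.MSG_WAITALL)
                    if len(header) < HEADER.size:
                        break  # sender closed the connection
                    offset, length = HEADER.unpack(header)
                    payload = conn.recv(length, socket.MSG_WAITALL)
                    volume.seek(offset)
                    volume.write(payload)
                    volume.flush()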

This architecture is a step up from my imagined iSCSI replication model in a couple of ways. First, Whiptail encrypts the replicated data in flight; without the catcher application, that would require IPsec or a VPN. In addition, Whiptail does snapshot-based, point-in-time replication.

While this type of replication has a minimum practical RPO (recovery point objective) of 15 minutes or more, it has several advantages over real-time asynchronous replication. Because multiple writes to the same block during a snapshot period are aggregated, point-in-time replication uses less bandwidth and is less sensitive to latency than real-time replication.
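
The bandwidth saving falls out of simple bookkeeping. A minimal sketch of the idea (names again illustrative): between snapshots the source only tracks which blocks are dirty, so a block rewritten a thousand times in 15 minutes crosses the WAN once, with its final contents.

    # Illustrative dirty-block tracking for snapshot-based replication.
    # Repeated writes to one block between snapshots collapse into a
    # single transfer of that block's final contents.

    class SnapshotReplicator:
        def __init__(self):
            self.dirty: dict[int, bytes] = {}  # block number -> latest data

        def record_write(self, block_no: int, data: bytes) -> None:
            # Overwriting the same entry is where the bandwidth
            # saving comes from.
            self.dirty[block_no] = data

        def take_snapshot(self) -> list[tuple[int, bytes]]:
            # Ship everything accumulated since the last snapshot once,
            # then start a fresh interval.
            changed = sorted(self.dirty.items())
            self.dirty.clear()
            return changed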

The open target application stores its data in a raw logical volume, which can be on any block storage the Linux or Windows server can access, be it SAN or DAS. With this open target, users can have the performance of an all-solid-state array and the simplicity of array-based replication without having to invest in a second, expensive all-solid-state array for their DR site. The disk system at the DR site will, of course, be slower, but many organizations may find that acceptable when disaster strikes.


