George Crump



Detailing Deduplication's Replication Mode - FalconStor

I've spent the last few weeks re-interviewing various deduplication vendors. While most conversations with them were broad discussions about deduplication, in this round of calls we focused on the replication process. Over the next 15 weeks we will provide some details on those conversations. It would be easier for me to avoid naming names and just speak in generalities, but I don't think that does the actual users of the technology much good. So we will name names and let the comments sort themselves out. The first wave of re-interviews focuses on the replication component of deduplication.

Our first interview was with FalconStor. The way you perform deduplication affects the way you replicate that data. FalconStor uses what they call concurrent deduplication: data is streamed to disk first and then deduplicated, but the deduplication process can begin as soon as each backup stream closes.
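To make the "concurrent" distinction concrete, here is a minimal sketch of the pattern as described: backup streams land on disk first, and a dedup worker picks up each stream as soon as it closes rather than waiting for the whole backup window to end. All names, the fixed segment size, and the hash-index approach are my own illustrative assumptions, not FalconStor's implementation.

```python
import hashlib
import queue
import threading

SEGMENT_SIZE = 4096             # assumed fixed segment size, for illustration
closed_streams = queue.Queue()  # streams that have finished writing to disk
segment_store = {}              # hash -> segment (the dedup repository)

def ingest(stream_id, data):
    """Simulate a backup stream landing on disk, then closing."""
    closed_streams.put((stream_id, data))

def dedup_worker():
    """Deduplicate each stream as soon as it closes (concurrent, not post-process)."""
    while True:
        stream_id, data = closed_streams.get()
        if stream_id is None:   # sentinel: no more streams
            break
        unique = 0
        for i in range(0, len(data), SEGMENT_SIZE):
            segment = data[i:i + SEGMENT_SIZE]
            digest = hashlib.sha256(segment).hexdigest()
            if digest not in segment_store:   # store only unique segments
                segment_store[digest] = segment
                unique += 1
        print(f"{stream_id}: {unique} unique segments stored")

worker = threading.Thread(target=dedup_worker)
worker.start()
ingest("job-1", b"A" * 8192 + b"B" * 4096)  # A-segment repeats within the job
ingest("job-2", b"A" * 4096 + b"C" * 4096)  # A-segment duplicates job-1's data
closed_streams.put((None, None))
worker.join()
```

The point of the pattern is latency: deduplication (and therefore replication of unique segments) can start while later backup jobs are still streaming in.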

FalconStor's solution can deduplicate up to eight jobs at a time by default, and that number can be adjusted up or down by the backup administrator, depending on the processing capabilities of the backup appliance and the rate at which data can be delivered to it. They also allow policy-based deduplication settings: certain jobs can be concurrent, others entirely post-process, and others not deduplicated at all. Once the deduplication process starts, the moment a unique segment of data is identified it is stored locally and then replicated across the WAN to another FalconStor device at the remote location.
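The two tunables described above, a cap on concurrent dedup jobs and a per-job policy, could be sketched roughly as follows. The job names, policy values, and semaphore-based cap are illustrative assumptions on my part, not FalconStor's actual configuration model.

```python
import threading

MAX_CONCURRENT_DEDUP_JOBS = 8            # default cap; admin-adjustable
dedup_slots = threading.Semaphore(MAX_CONCURRENT_DEDUP_JOBS)

policies = {                             # hypothetical job name -> dedup mode
    "exchange-daily": "concurrent",      # dedupe as each stream closes
    "oracle-weekly": "post-process",     # dedupe after the backup window
    "video-archive": "none",             # pre-compressed data: skip dedup
}

def run_dedup(job_name):
    """Apply the job's policy, honoring the concurrency cap."""
    mode = policies.get(job_name, "concurrent")
    if mode == "none":
        return f"{job_name}: skipped"
    with dedup_slots:                    # at most 8 jobs deduplicate at once
        return f"{job_name}: deduplicated ({mode})"

for job in policies:
    print(run_dedup(job))
```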

FalconStor claims a 150:1 fan-in ratio, so you could have remote offices back up locally and then replicate into a single large appliance. They offer what I call WAN-optimized deduplication. For example, say Sites A and B protect data locally and then replicate to a DR site. First Site A sends its data to the DR site; later Site B sends its data, but some of Site B's data is the same as what Site A has already sent. WAN-optimized deduplication tells Site B not to send data the DR site already has. It is important to note that this is a single communication between Site B and the DR site: Site B does not check with all the other sites, just the DR site. In addition to WAN-optimized deduplication, FalconStor builds QoS-like functionality into the software so replication uses only a certain percentage of available bandwidth. The utilization rate can be adjusted based on time of day or the amount of available bandwidth.
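The Site A / Site B example above boils down to a simple exchange: before shipping a segment across the WAN, the source site asks the DR site (and only the DR site) whether it already holds that segment's hash, and sends only the misses. Here is a minimal sketch of that exchange under my own assumptions about segment size and hashing; it is not FalconStor's protocol.

```python
import hashlib

SEGMENT_SIZE = 4096   # assumed fixed segment size, for illustration

class DRSite:
    """Stand-in for the DR-site appliance's segment index."""
    def __init__(self):
        self.store = {}                      # hash -> segment

    def has_segment(self, digest):
        """The single source-to-DR check described in the article."""
        return digest in self.store

    def receive(self, digest, segment):
        self.store[digest] = segment

def replicate(data, dr):
    """Replicate a site's data to DR, skipping segments DR already holds."""
    sent = 0
    for i in range(0, len(data), SEGMENT_SIZE):
        segment = data[i:i + SEGMENT_SIZE]
        digest = hashlib.sha256(segment).hexdigest()
        if not dr.has_segment(digest):       # ask the DR site, nobody else
            dr.receive(digest, segment)
            sent += 1
    return sent

dr = DRSite()
common = b"X" * SEGMENT_SIZE                 # data both sites happen to hold
print(replicate(common + b"A" * SEGMENT_SIZE, dr))  # Site A sends both segments
print(replicate(common + b"B" * SEGMENT_SIZE, dr))  # Site B skips the shared one
```

Note the design point from the article: the dedup check is pairwise with the DR site only, so sites never need to coordinate with each other.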

Finally, FalconStor's OST support (OST is Symantec NetBackup's API for advanced backup devices) lets you control much of the deduplication and replication process through the NetBackup interface. This allows a NetBackup shop to reduce the number of steps involved in managing deduplication and replication.

