David Hill

Network Computing Blogger



SEPATON: Playing A Key Role In Enterprise-Class Disaster Recovery

One of the most significant technologies to emerge and evolve over the past decade for improving data protection, a basic element of risk management, is disk-based backup. At the enterprise level especially, disk-based backup typically takes the form of virtual tape libraries (VTLs). While smaller companies have been able to use disk-to-disk backup and some limited-capacity VTLs, only a few VTL technologies deliver the performance and scalability that enterprise organizations need. One company that has leveraged this enterprise demand to become an established market leader in the VTL space is SEPATON.

SEPATON has made its mark with its S2100-ES2 VTL, a data protection appliance designed with enterprise-class scalability and performance in mind. In addition, SEPATON's core VTL technology is designed to accommodate software modules that integrate fully into its operation: one such module provides strong data deduplication, and another provides bandwidth-optimized remote replication.
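To make the deduplication idea concrete, here is a minimal sketch of generic hash-based, block-level deduplication in Python. To be clear, this is not SEPATON's algorithm (the company's implementation is proprietary and far more sophisticated); the fixed chunk size and the two sample "backups" are assumptions for illustration only.

```python
import hashlib, os

def deduplicate(stream: bytes, chunk_size: int = 4096):
    """Toy block-level deduplication: store each unique chunk exactly once.

    A sketch of the general technique only -- not any vendor's
    implementation. Chunk size is an assumed parameter.
    """
    store = {}    # chunk hash -> chunk payload (the deduplicated store)
    recipe = []   # ordered hashes needed to reconstruct the stream
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # payload written only if unseen
        recipe.append(digest)
    return store, recipe

# Two nightly "backups" that differ by one chunk share everything else:
monday = os.urandom(4096 * 10)            # 10 chunks of data
tuesday = monday + os.urandom(4096)       # same data plus one new chunk
store, _ = deduplicate(monday + tuesday)  # back up both images together
print(len(store))                         # 11 unique chunks stored, not 21
```

Because the second backup merely re-references chunks already in the store, only changed data consumes new capacity, which is the property that makes disk-based backup economical at enterprise scale.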

VTLs were initially used primarily for operational recovery, that is, recovery at the local site. Although such a recovery may be required because of a physical failure (such as two nearly simultaneous disk failures in a RAID-protected disk array), the more likely trigger is a logical failure, such as damage from a hacker or virus, database corruption, or simply the inadvertent deletion of a file. Beyond its main purpose as an operational recovery technology, a VTL offers additional benefits, such as shortening the backup window.

VTL vendors have now turned their attention to broader disaster recovery support, and SEPATON is no exception. Essentially, data at a local site must also be available at a remote location designated as the disaster recovery (DR) site. When the local site is unable to perform its basic functions for an extended period, the DR site springs into action and assumes responsibility for running production applications.

One traditional method of protecting data at a DR site has been remote mirroring. Mirroring is useful for restarting critical applications at the DR site when downtime is intolerable, such as for a Web-based retailer that must be up and running at all times. However, the high cost of remote mirroring is hard to justify for applications that do not require such high availability. Even for applications that might benefit, remote mirroring protects only against physical problems; logical problems are quickly propagated from the local site to the remote site. Every time they back up data, enterprises face the challenge of moving large volumes of data over the network to the remote site. For most organizations, remote replication is too slow and too costly to be feasible. As a result, most continue to back up data to physical tapes and truck them to an off-site location, a process that is highly manual, risky and slow.
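A back-of-envelope calculation shows the scale of the problem. The backup size and WAN bandwidth below are assumed figures for illustration, not measurements from any vendor:

```python
# Back-of-envelope arithmetic: moving a full backup over a WAN.
# All figures are illustrative assumptions, not vendor measurements.

backup_tb = 50          # assumed size of a full enterprise backup, in TB
wan_mbps = 100          # assumed dedicated WAN bandwidth, in megabits/sec

bits = backup_tb * 1e12 * 8             # TB -> bits
hours = bits / (wan_mbps * 1e6) / 3600  # ideal time, ignoring protocol overhead
print(f"{hours:,.0f} hours")            # ~1,111 hours -- weeks, not a window
```

Even under ideal conditions, that is weeks of transfer time for a single full backup, which is why deduplicated, bandwidth-optimized replication (or a truck full of tapes) remains the practical alternative.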

