Disaster Recovery Services

Although five vendors vied to help our fictional retailer, Darwin's Groceries, fortify its disaster recovery strategy, Fujitsu Softek was the one we dropped into our shopping cart.

January 16, 2004


On behalf of Darwin's, Network Computing invited leading data-protection product and service vendors to bid on the job. Of the 40 companies contacted, ranging from disk and tape hardware vendors to backup software vendors to disaster-recovery facilities, only six responded: Computer Associates International, Fujitsu Softek, Hewlett-Packard, Quantum Corp., Tacit Networks and Veritas Software. Of the many that opted out, a few offered reasons for doing so, including a lack of available resources to complete a response, fear that a "proprietary solution" would be rejected and product-release dates incompatible with the RFP deadline. Most vendors, unfortunately, offered no explanation at all or initially indicated an interest in participating but never delivered a response (for a look at SunGard Availability Services and LiveVault Corp., see "Replication at Your Service").

Of those that came through, Tacit Networks had to be disqualified because its response provided no mechanism for replicating databases. Indeed, its system applies only to file replication. Be that as it may, we've included a summary of Tacit's offering because it provides a unique and potentially powerful capability for those seeking file-system-based data replication (see "Tacit's Approach? We Approve").

Vendors at a Glance

The responses covered three categories of data-replication approaches: replication at the hardware level (HP and Quantum), replication via host software (CA, HP, Fujitsu Softek, Quantum and Veritas) and replication in a network (Tacit). One method that was not examined was replication within the application. Oracle is steadily building out its database-replication capabilities--with Parallel Server, for example--but we think it's testimony to the immaturity of the database vendor's approach that the subject of database parallelism didn't even come up in the proposals received. Or, it may reflect a common-sense view of replication: Making copies at the application layer would likely require one replication product per application, whereas performing copy operations at the storage layer can be done by a single set of tools.

In the end, our analysis of the proposals provided a matrix of options rather than a "one size fits all" solution. If you're looking to answer one of the most difficult and important IT questions of the day--how to protect one of your company's most irreplaceable assets, its data--we recommend you gather as many proposals as possible to facilitate decision-making. While getting a number of quotes is always important, of course, the responses we received illustrated the wide range of possibilities and price ranges (see the complete responses).

This time out, Fujitsu Softek gets our Editor's Choice nod. No doubt, every solution provided (except Tacit's) could have met the needs of Darwin's. But Softek (and its fellow software-company contenders) avoided the forklift upgrade to existing infrastructure right off the bat.

Softek's straightforward proposal was a welcome relief from the rivals' somewhat obfuscated responses. It proposed a three-phase approach to solving the replication problem Darwin's faced.

Phase 1 required the licensing of Softek's Replicator product for servers at headquarters, plus the purchase of 40 TB to 50 TB of direct-attached storage and servers to act as replication targets. Nice thing about the Softek bid was that it left the choice of servers and storage open: Expensive Tier 1, inexpensive Tier 2 or just about anything Darwin's might have locked away in its closets would do the trick. Softek also proposed to preserve Darwin's existing investment in high-end tape libraries for use in disaster recovery until a fully functional DR site could be established and made operational in Phase 2. In short, a centralized local replication with dump to tape was specified in the short term.

In Phase 2, Softek proposed that the Replicator system be extended across a WAN to a SuperGigantic store, built out to serve as a DR backup site. This was an interesting idea, if a bit naive: Seeing as Softek's parent company, Fujitsu, sells POS (point of sale) systems, it should have known that the only thing more expensive than data centers is retail space. The profitability of a retail or grocery store is directly proportional to the square footage allocated to merchandise, so one could foresee store personnel beginning to fill IT recovery-center space with overstocked bananas and canned meat products.

Wherever the recovery site was located, some direct-attached arrays and servers would be installed in Phase 2. Softek proposed that its Storage Manager software then be added, both to scrub the data before replicating it through the enterprise and to manage the replication processes together with other aspects of the storage environment.

In Phase 3, Darwin's would deploy a SAN and use Replicator to migrate data from direct-attached storage to the new topology. The company would implement Softek SANView to manage the SAN, Softek Provisioner to perform heterogeneous LUN management and, optionally, Softek EnView to manage QoS (Quality of Service) and service-level compliance.

The writers at Softek seemed to get the message that Darwin's wanted to keep costs low while replicating a sizable amount of data locally and remotely. Softek's Replicator leverages existing infrastructure and low-cost build-out options, and supports existing IP networks between locations. Vendor-agnostic, the solution precludes any hardware lock-in.

The only missing ingredient was a provision for securing stored data. This was more than compensated for, however, by the detailed and eminently forthright discussion of the need for data hygiene and testing methodology. Rolling out the proposed solution seemed extraordinarily simple. Two days of professional services would probably be required for Phase 1, at a cost of $5,000. The Replicator Server implementation would come out to $11,500 for a four-processor server license and $1,170 per year in maintenance. The cost of additional local disk and server hardware was not addressed.

Phase 2 adds a base cost of $39,000 for the replication strategy, plus $795 per managed server. This amounts to $48,540 for servers in the HQ data center and DR facility, plus $126,405 to instrument all store servers into the managed pool, for a total cost of $174,945. Not included is the cost of storage and server hardware at the DR site.
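The arithmetic behind these figures checks out. Here's a quick sketch reproducing Softek's totals; note that the server counts (12 across the data centers, 159 in the stores) are our inference from the per-server price, not numbers the bid stated:

    # Back-of-the-envelope check of Softek's Phase 2 pricing.
    # Server counts are inferred from the per-server price; the bid
    # did not state them explicitly.
    BASE_COST = 39_000        # Phase 2 base cost for the replication strategy
    PER_SERVER = 795          # per managed server

    dc_servers = 12           # HQ data center + DR facility (inferred)
    store_servers = 159       # store servers in the managed pool (inferred)

    dc_cost = BASE_COST + dc_servers * PER_SERVER
    store_cost = store_servers * PER_SERVER
    print(dc_cost)                # 48540
    print(store_cost)             # 126405
    print(dc_cost + store_cost)   # 174945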

Phase 3 is when the infrastructure forklift occurs and a SAN is implemented. As with the other bids, most of this infrastructure-upgrade cost is undocumented. Softek's software components total $50,000 for two Storage Provisioner "engines," and $19,500 for EnView. SANView pricing was not provided--about the only deficit in the bid.

Fujitsu Softek, (877) 887-4562, (408) 746-7638. www.softek.com

Veritas Software, which was identified in our e-poll as the perceived leader in data-protection technology, with 70 percent of 549 votes, seemed a likely contender. In fact, we thought the company's longstanding offering for Oracle database replication, combined with the broad, platform- and OS-agnostic breadth of its solution, would make it a shoo-in for Editor's Choice. However, we found the Veritas proposal sparse and, in some ways, nonresponsive to the needs of Darwin's; we suspect that this is the result of a bit of "laurel resting" by the authors.

Veritas proposed its Volume Manager, FlashSnap and Volume Replicator product trio to segregate data, capture point-in-time copies and forward them across an IP interconnect to the remote facility Darwin's uses. In the next breath, it added the recommendation that Cluster Server and Global Cluster Manager be considered to enable the failover of applications to the remote site.

On the plus side, this proposed solution boasts less than one minute of data delta in the 80-mile, cross-site replication scenario. Other strengths of the proposal included its fit with existing infrastructure (asserted, but not substantiated through a cross-reference to a supported equipment list), support for virtually any WAN connection (Fibre Channel not required) and its flexibility in terms of target host platforms.

Although there was nothing inherently problematic about Veritas' proposal, it was weak on actual implementation instructions. We were simply provided with descriptions of software and a terse overview of its operating philosophy. Left to the imagination was any sort of description of how and where the components were to be implemented in the existing infrastructure of Darwin's, something that a newbie to the Veritas approach might need to have spelled out in greater detail. Also notably absent from the Veritas bid was any mention of how it would meet requirements for data security, data hygiene and capability testing.

We did the math without instruction as to the actual deployment of the recommended software components and came up with $191,235 in software licenses and implementation-support services (11x$17,385) for the headquarters site alone. Additionally, an annual 24/7 technical-support contract plus training services came to $24,854 in Year 1. So, not even considering the licenses, implementation support and tech support for servers at the remote site, the price tag was already $216,089.

Veritas also recommended that Darwin's deploy its Cluster Server to provide application failover. Clustering was optional in the Veritas bid, and we were given no guidance on whether some or all of our servers would require the Cluster Server package, so our math may be sketchy. According to the vendor, the Cluster Server would cost Darwin's $8,995 per server, installed, plus an annual tech-support contract for all clustering technologies for $1,033 per server. Global Cluster Manager, with installation, priced out at $12,290 per site. Thus, the Veritas bid grew by another $134,888, excluding hardware and Cluster Server software costs for implementation at the recovery site, to a total of $350,977. Still, without the additional cluster technology, Veritas' bid was in line with Softek's, so we didn't ding it.
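Our tally, in sketch form. The 11-server count comes from the bid's 11 x $17,385 line item, and the figures imply Global Cluster Manager licenses at both sites--both inferences on our part:

    # Tally of the Veritas bid from the per-unit prices quoted above.
    SERVERS = 11                    # per the 11 x $17,385 line item

    core = SERVERS * 17_385         # licenses + implementation support, HQ only
    support_y1 = 24_854             # 24/7 tech support plus training, Year 1
    base_total = core + support_y1
    print(base_total)               # 216089

    cluster = SERVERS * (8_995 + 1_033)   # Cluster Server + annual support
    gcm = 2 * 12_290                      # Global Cluster Manager, both sites
    print(cluster + gcm)                  # 134888
    print(base_total + cluster + gcm)     # 350977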

The vendor promised to implement the core host software-based solution, excluding the clustering technologies, within three days. Overall, we got the sense that the little 56-TB data-replication problem confronting Darwin's had elicited scarcely more than a yawn from Big V.

Veritas Software, (866) 837-4827, (650) 527-8000. www.veritas.com

Because Computer Associates led off with a summary containing a partially flawed premise--that Darwin's wanted to replicate critical data from all remote offices to its central site--we worried that CA's lengthy proposal might be off on the wrong foot. We were pleasantly surprised when this proved not to be the case.

Emphasizing platform agnosticism, CA's pitch was for a managed replication-software solution leveraging the vendor's BrightStor Storage Resource Manager. Like those at Fujitsu Softek, the bright folks at CA recognized the need, first, to prescribe and filter the data to be replicated, then to instrument the infrastructure for the replication process. They also promised to preserve existing infrastructure investments on the part of Darwin's.

It was challenging to separate product branding from solution branding in the voluminous CA bid, but our sense was that the effort would begin with further needs assessment, which is always a good idea. We had hoped that the vendor would first turn its attention to data assessment that might help determine which data was important and the rate of data change. These two characteristics might provide Darwin's with guidance on the best way to duplicate its huge data repository over the long haul.

Unfortunately, the discussion turned immediately to storage-platform discovery and capacity-usage assessment. CA offered the unsubstantiated assertion that most companies tend to store their most important data behind a few key servers and that features of its BrightStor ARCserve backup product, such as Disaster Recovery Option, can be used to bring data access back to life rapidly in certain cases. An additional measure of resilience, according to CA, is to mirror data between cluster servers for high availability, a function it can support with BrightStor High Availability on Windows servers.

The proposal took a long time to get to the point CA was trying to make: that BrightStor High Availability can be used across an IP network (LAN or WAN) to provide replication and failover services between servers. Not until page 26 did the vendor begin discussing BrightStor SRM, BrightStor Portal and BrightStor Enterprise Backup, the other components of its proposal.

Specific answers to the RFP were short, frequently referencing material buried somewhere in the preceding 40-odd pages of brochure extracts. Interestingly, CA's testing methodology recommends the use of "dedicated test servers at different locations" so that the replication and failover services can be interrupted and failover can be demonstrated without causing interruption in normal processing. This "testing by proxy" method is problematic at best because it doesn't verify that primary or production servers are capable of the same result. We'd prefer a tool or function that would let us see the replication service in operation and confirm, through heartbeats or some other indicator, that the link to the failover server is good and is standing by to carry the load. We didn't ding CA for this response, however, because it at least tried to tackle the question when many rivals did not.

To be honest, given CA's extensive résumé in enterprise management, we expected more detail and capability description in its response to the issue of how proper operation of the solution would be verified. Instead, it offered only that breakdowns would produce alerts.

Pricing was (mercifully) done for us: $467,000 plus $250,000 in implementation fees, for a total of $717,000. Although no third-party costs were specified, recommendations at various points in the document suggested that servers be dedicated to management-software deployment. These hardware costs, plus the costs for any "proxy" testing servers and for servers and storage devices at the remote site, could add up.

Computer Associates International, (800) 225-5224, (631) 342-6800. www.ca.com

Quantum's response came from its Storage Solutions Group and specified both hardware and software components. Quantum assumed many facts not in evidence about the environment at Darwin's, including pre-existing use of Veritas' NetBackup 4.5 to conduct backups over the LAN to the existing tape libraries shown in the diagram (see our scenario, below). The vendor further stated that the NAS filers within the grocery chain's empire of storefronts were "Network Appliance 700- and 800-series filers," and assumed that additional filers were deployed within the headquarters data center because of the mention of NDMP over Gigabit Ethernet. Quantum also noted that the RFP left the operating systems on Darwin's database servers unspecified, and recommended that the company deploy, if it had not done so already, Sun Solaris to support transaction-heavy workloads. Um, OK.

Having stated these assumptions, the folks at Quantum SSG proceeded to reinvent the infrastructure at Darwin's. They recommended consolidating storage platforms behind the Store Accounting Systems onto a single Hitachi Lightning 9900 array, and establishing a primordial Fibre Channel fabric by deploying a Brocade Communications Systems Fibre Channel switch between the Lightning and its servers. Over time, they said, other storage platforms for the inventory and data-mining applications would also be woven into the fabric. Ultimately, they claimed, this strategy would enable the replication of data over Fibre Channel rather than IP.

Quantum further suggested deploying a secondary site (a recovery center) with an interconnecting VPN to encrypt data in transit. Then, to solve the problem of replication, it recommended the same software set as Veritas, including the Cluster Server. According to Quantum, the clustering software would provide assurance that application servers at the remote location were tested and ready to receive load in the event of an outage, an observation Veritas never explicitly made.

Assuming that Darwin's would swallow all these infrastructure changes as recommended, Quantum offered its own 64-TB DX100 enhanced backup system to replace the tape libraries in use. This would serve as a "secondary restore device" for use when and if the data replicated by Veritas' Volume Replicator became corrupt, and also as a staging area for archival backup, which would henceforth be made to a Quantum MAKO PX720 library populated with 12 SDLT 6000 tape drives. The DX100 could be used to provide a fast disk-to-disk restore of data, either as a separate process or in conjunction with Volume Replicator's checkpoint restore feature.

Interestingly, Quantum saw the DLT-MAKO combination not only as a backup to Veritas' Volume Replicator and archival medium for headquarters applications, but also as a means to back up data presumed to be flowing from NAS appliances in the remote stores to NAS appliances in the HQ data center. Some tape drives in the PX720 would be allocated to this purpose and attached via NDMP over Gigabit Ethernet, while the remaining eight drives would be interfaced to the Fibre Channel fabric via DinoStor bridges and virtualized via NetBackup's SSO (Shared Storage Option).

Quantum estimated that this infrastructure rearchitecture would take nine months--three for planning and procurement, the rest for the forklift upgrade. It didn't price the overall solution, only the cost for its DX100 and MAKO library configurations, which totaled $905,705, not including service and support. Including our estimate for software in the Veritas bid, the total cost exceeded $1,256,682--and this, of course, was sans SAN, HDS array and Brocade switches!

Also missing from the bid was a deeper sense of the meaning of, and provisioning for, storage security. Quantum emphasized link encryption and segregation of traffic between the remote stores and the headquarters facility using VPNs, but it didn't mention how data would be secured once written to archival tape or backup disk.

Quantum Corp., (800) 677-6268, (408) 944-4000. www.quantum.com

Like that of Quantum, HP's proposal required the implementation of new gear--specifically, a matched set of EVA 5000 arrays (one local, the other remote), the company's CASA (Continuous Access Storage Appliance) and a SAN. The solution provided for Darwin's to replicate locally, and we quote, "only the critical data on Darwin's heterogeneous storage to a high-performance EVA 5000 array" using the CASA. Then, the CASA would provide synchronous replication of the data, via a Fibre Channel-over-IP tunnel established through the IP WAN to the remote EVA array. This is a classic multihop mirroring solution, with the CASA acting as the intermediary that supplies the intelligence and fault tolerance. The vendor was further prepared to support implementation with its HP Disaster Tolerant Management service offering.

The primary solution proposed by HP included provisions for link-level security, but not encryption of the storage itself. Well-defined procedures were mapped out for testing failover--and actually failing over--between the arrays. Moreover, HP provided a detailed list of supported storage infrastructure (arrays and operating systems), which was omitted from the Quantum bid.

A useful chart was provided to identify throughput and I/O-per-second characteristics of various WAN, LAN and SAN interconnects.

HP also provided an overview of its OpenView Storage Operations Manager. Industry-buzz terms such as SMI-S (Storage Management Initiative Specification) were much in evidence to emphasize the need, but they did little to differentiate the offering from the management solutions HP expected rivals to propose.

Understanding that this was a high-dollar proposal, HP dwelled at some length on IT and business benefits--a nice touch, and one not emulated in any of the other proposals. The vendor claimed that the total projected value of the system would be approximately $23 million over five years. It claimed a whopping 354 percent ROI and payback within 14 months. Despite the considerable effort HP put into this analysis, we couldn't help but feel that, predicated as it was on the largely unsubstantiated cost of a disaster at Darwin's, the numbers were unlikely to sway the grocery chain's management. That's because missing from the equation was any analysis of the likelihood of a disaster actually occurring, which is a mitigator of any analysis of exposure or value. Neither HP's proposal nor those of its competitors offered anything approaching an exposure analysis that might underscore what Darwin's stood to lose in an outage.
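For what it's worth, HP's headline numbers do hang together arithmetically. Assuming the conventional net-gain-over-cost definition of ROI (HP didn't show its math, so this reconstruction is ours), a 354 percent return on $23 million of five-year value implies a cost basis of roughly $5.1 million, which in turn yields a payback of about 13 months:

    # Reconstruction of HP's ROI claims from the headline numbers.
    # The ROI definition (net gain over cost) is our assumption.
    benefit_5yr = 23_000_000          # claimed five-year value
    roi = 3.54                        # claimed 354 percent ROI

    implied_cost = benefit_5yr / (1 + roi)
    print(round(implied_cost))        # ~5066079

    monthly_benefit = benefit_5yr / 60
    print(round(implied_cost / monthly_benefit, 1))   # ~13.2 months, "within 14"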

Price for the HP proposal was a bit difficult to ferret out from the morass of archived attachments included in the bid. Based on document titles, it appeared that the sticker price of $466,944 for CASA and $5,609,108 for the two EVA arrays applied, for a total of $6,076,052. The bid also stated that SAN Valley FCIP gateway switches were required, and pointed to a spreadsheet that listed several models and numerous configuration and pricing options. We couldn't discern which products and options were needed in our case. The overall price tag also didn't include the SAN-ification of existing infrastructure or the additional cost for OpenView management software, no small potatoes.

The considerate folks at HP did proffer a lower-cost solution, early on in the proposal. The company suggested that if Darwin's lacked the bucks to do the best job of protecting its mission-critical assets, an alternative existed in the form of HP OpenView SM (Storage Mirroring), a host-based application that performs remote copy over an IP LAN or WAN. SM is hosted on a Windows 2000 server and offers asynchronous replication at the LUN, file or byte level. Requiring no investment in Fibre Channel networks, it provides high-capacity replication and zero-downtime functionality in environments with low bandwidth and a low rate of storage-volume change.
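Conceptually, host-based mirroring of this sort boils down to a loop that detects changed data on the source volume and forwards it to a replica across the network. The toy sketch below illustrates the general technique at the file level--it is emphatically not HP's code, and the paths are hypothetical:

    # Toy illustration of host-based asynchronous replication at the
    # file level -- the general technique, not HP OpenView SM itself.
    import os
    import shutil

    SOURCE = r"D:\data"            # hypothetical local volume
    TARGET = r"\\dr-site\replica"  # hypothetical share at the DR site

    def replicate_changed_files(source: str, target: str) -> None:
        """Copy any file whose mtime or size differs from its replica."""
        for root, _, files in os.walk(source):
            for name in files:
                src = os.path.join(root, name)
                dst = os.path.join(target, os.path.relpath(src, source))
                try:
                    s, d = os.stat(src), os.stat(dst)
                    unchanged = (s.st_mtime <= d.st_mtime
                                 and s.st_size == d.st_size)
                except FileNotFoundError:
                    unchanged = False
                if not unchanged:
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.copy2(src, dst)  # copy data plus timestamps

    replicate_changed_files(SOURCE, TARGET)

A real product does this at the byte or block level, journals writes so nothing is lost between passes, and throttles itself to the available WAN bandwidth.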

At $4,495 per seat with a minimum of two seats (for the local and remote sites), this provided a bargain-basement approach that we would have liked more detail on. However, HP didn't provide adequate documentation of the alternative low-price proposal to enable us to evaluate its suitability before press time. It might have been the better choice, or it may not have satisfied our criteria. Otherwise, the HP bid was very complete.

Hewlett-Packard Co., (800) 752-0900, (650) 857-1501. www.hp.com

Jon William Toigo is CEO of storage consultancy Toigo Partners International and author of 13 books, including Disaster Recovery Planning: Preparing for the Unthinkable (Pearson Education, 2002). Write to him at [email protected].

We dispatched an RFP seeking a comprehensive data-protection solution for fictional retailer Darwin's Groceries. Darwin's has reason to worry: As the company expanded from its minimart roots, it launched a chain of SuperGigantic stores, many of which wreaked havoc with local small businesses. New stores are being met with sometimes-violent demonstrations, and management fears that "anti-Darwin's malcontents" will figure out that the company's smart use of technology is a key reason it can keep prices so low and will decide to target its technology infrastructure.

Examples of Darwin's data-collection tactics: Cash registers forward purchase information back to a central server in each store, and wireless handheld computers with scanning wands are used by staff to update stock reports. Nightly, information collected at stores is transmitted to a centralized data-storage platform at headquarters. Once data is collected at HQ, it's replicated and directed both to a process that updates store-management and inventory-control systems and to the company's tape-backup process. Then, some data is abstracted for use in a data warehouse that helps Darwin's spot trends to boost revenue and reduce costs.

Darwin's Data Foodchain

Darwin's wants to ensure that its data is well-protected during collection, transport and storage. It's also seeking to improve its storage infrastructure and management capabilities to enable storage and data protection to scale nondisruptively, and to provide better information on the status of storage-related replication and backup processes. The company is interested in exploring disk-to-disk data-replication strategies but has thus far been unable to find a vendor that can support its heterogeneous storage infrastructure. Ultimately, the company would like to use tape for archiving and disk-based replication for disaster recovery.

The company would also like to consider cost-effective methods for replicating its headquarters infrastructure at an alternate site so that business will continue uninterrupted in the event of a fire or an interruption in network services.

In our RFP, we asked for a system that could:

• Replicate mission-critical data reliably and securely across a WAN so that the remote data copy is synchronized to within five minutes of the original and is available for use by applications within 30 minutes of an interruption of normal processing operations.

• Host replicated data on storage platforms or topologies in the production environment that don't replicate on a one-for-one basis, enabling greater flexibility and lowering costs.

• Monitor performance of the replication strategy (a minimal lag-check sketch follows this list).

• Test the replication strategy without disrupting normal application or storage operations.

• Secure data during the replication process and after failover of application access to the replicated data set.

• Scale readily in response to increases or decreases in the volume of data to be replicated.

• Cull from replicated data duplicate and/or noncritical data, as well as data or files containing virus signatures or other malicious software code.

• Automate techniques for optimizing data transfers across WAN interconnects of varying bandwidth and for optimizing WAN interconnects for best possible cost efficiency.
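To give a flavor of what the monitoring requirement implies, here is a minimal sketch of a lag check against the five-minute synchronization target. The mtime-based probe is a stand-in; a real deployment would query the replication product's own status interface, and the mount points are hypothetical:

    # Minimal replication-lag check against a five-minute RPO.
    # newest_mtime() is a stand-in probe; a real deployment would ask
    # the replication product's status API for the last committed write.
    import os

    RPO_SECONDS = 5 * 60   # "synchronized to within five minutes"

    def newest_mtime(path: str) -> float:
        """Newest file-modification time under a directory tree."""
        return max((os.path.getmtime(os.path.join(root, f))
                    for root, _, files in os.walk(path) for f in files),
                   default=0.0)

    def check_lag(primary: str, replica: str) -> None:
        lag = newest_mtime(primary) - newest_mtime(replica)
        if lag > RPO_SECONDS:
            print(f"ALERT: replica is {lag:.0f}s behind (RPO {RPO_SECONDS}s)")
        else:
            print(f"OK: replication lag {lag:.0f}s")

    check_lag("/data/primary", "/data/replica")   # hypothetical mount points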

Vital Stats

• Darwin's operates 150 minimarts, some of them not so mini, and 10 SuperGigantic stores. Smaller minimarts use VPN connections over the Internet or direct dial-up modem connections to transfer data to HQ. Larger marts and the SuperGigantic locations have dedicated high-bandwidth connections, plus VPNs as a backup.

• All storefronts have centralized servers with NAS arrays for storage. NAS platforms in minimarts have a capacity of about 0.5 TB; in the SuperGigantic sites, capacity is about 1 TB.

• Change data is transferred nightly. On weekdays, about 750 GB of data is transferred; on weekends, about 1.5 TB of data passes between the stores and headquarters. There is no data backup in individual stores.

• Store-accounting systems are Oracle databases, spread over three large servers with direct Fibre Channel-attached arrays, each with 10 TB of capacity.

• Inventory-management systems are hosted on servers with SCSI-attached 8-TB XIOtech arrays. This data is considered critical.

• Data warehousing and data mining are performed in a workstation cluster sharing a common 10-TB HDS array. This data is also deemed critical.

• Data backup is conducted via Gigabit Ethernet and NDMP to three high-end tape libraries that provide a barely acceptable 2 TB to 4 TB per hour of backup throughput. Darwin's wants to migrate away from tape and into disk-based data replication, preferably platform-agnostic.

Replication at Your Service

If setting up an in-house disaster-recovery plan is too intimidating, there are a number of providers ready and willing to do it for you. We spoke with representatives of two such providers, SunGard Availability Services and LiveVault Corp. Not surprisingly, John Lindeman, vice president of product development at SunGard, says most companies now find it exceedingly difficult to assess their needs and roll out disaster-recovery programs without outside help.

"Missing in most of these analyses is a realistic recognition of the scope and depth of the problem," Lindeman says. "Change management is a major problem. Most people do not understand what is involved in running an availability solution and how to ensure that all aspects are being addressed, or that a necessary quality of service will be provided when required."

In the past, Lindeman observes, most data replication came in the form of stovepipe systems with proprietary vendor lock-ins. "Today, given the heterogeneity of open-systems environments, finding a 'one size fits all' solution is difficult," he says. "IT is required to deliver support for an increasing proliferation of business services."

As for costs, Lindeman says SunGard's managed services rival the TCO advantages of do-it-yourself approaches. The company offers not only the infrastructure for data protection, but the resources and skills, plus the reporting structures that enable clients to view trends affecting data-protection provisioning and to validate mirroring and tape-vaulting operations on demand.

The complexities and lack of resources Lindeman cites are all the more daunting in small and midsize businesses, says LiveVault CEO Bob Cramer. LiveVault, in which information management services provider Iron Mountain holds a 12 percent stake, is garnering more than 200 customers per quarter, he adds.

A major driver, Cramer notes, is that the failure rate of tape, pegged by Gartner at 40 percent to 50 percent, increases significantly when you move outside Class A data centers, such as those operated by Fortune 500 companies. LiveVault's alternative is to provide an Internet link to a secure off-site facility where data can be replicated in a controlled setting. LiveVault is activated by loading software onto the dominant server in the customer's LAN, then providing the server with a secure link to the LiveVault facility.

The company offers a guarantee against failure (refund of subscription costs), as well as full visibility and management of the system from the customer site. LiveVault's service, which costs $2,950 per year per dominant server, is competitive with the price tag of homegrown strategies, Cramer says, adding that the increasing availability of high-bandwidth, last-mile links to the Internet is a big driver of his business.

Tacit's Approach? We Approve

Although Tacit Networks' response to our disaster-recovery RFP fell short of addressing the replication needs of Darwin's databases, we were impressed with Tacit's unique approach to data replication for file systems. It's the kind of solution that leading vendors in the database market have sought to deliver for several years--with limited success--and we think it's worth a look.

Tacit approaches data replication from the standpoint of "online data coherency"--in other words, replicating "writes" at one data center immediately at another. The company uses advanced streaming, differencing and compression techniques to transfer updates quickly to large data sets between sites.

A set of specialized networked appliances provides the means to share one global instance of a data set among multiple users.

The Tacit solution works with flat files and doesn't run application software. In fact, it won't work with database files unless they're first converted to a flat-file format. The solution uses any network interconnect that supports NFS (Network File System) or CIFS (Common Internet File System), and leverages specialized appliances that enable WAN file sharing in real time, so that the need for costly replication (defined as the sharing of point-in-time copies) is eliminated completely. A single instance of a data file is instantaneously updated, globally and across the entire enterprise.
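The differencing-and-compression idea is simple to illustrate, even though Tacit's actual streaming pipeline is proprietary. A toy block-differ shows why the technique moves far less data than shipping full copies:

    # Toy block-level differencing with compression: ship only the
    # blocks that changed. Illustrates the general technique, not
    # Tacit's proprietary streaming/differencing pipeline.
    import zlib

    BLOCK = 4096

    def delta(old: bytes, new: bytes) -> list:
        """Return (block_index, compressed_block) for each changed block."""
        changes = []
        for i in range(0, len(new), BLOCK):
            if old[i:i + BLOCK] != new[i:i + BLOCK]:
                changes.append((i // BLOCK, zlib.compress(new[i:i + BLOCK])))
        return changes

    def apply_delta(old: bytes, changes: list, new_len: int) -> bytes:
        buf = bytearray(old[:new_len].ljust(new_len, b"\x00"))
        for idx, blob in changes:
            block = zlib.decompress(blob)
            buf[idx * BLOCK: idx * BLOCK + len(block)] = block
        return bytes(buf)

    old = b"a" * 20_000
    new = old[:5000] + b"b" * 100 + old[5100:]
    changes = delta(old, new)
    assert apply_delta(old, changes, len(new)) == new
    print(f"shipped {len(changes)} of {len(new) // BLOCK + 1} blocks")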

Touting its solution as a "groundbreaking answer" to alternatives from EMC, Hewlett-Packard, Network Appliance and other storage vendors, Tacit insists the offering is a new model that shouldn't be compared with replication and copying, or with bandwidth optimization.

Tacit Networks, (866) 822-4869, (908) 757-9950. www.tacitnetworks.com
