Storage

02:00 PM
George Crump
Commentary

SAN Cloud Storage

As we discussed in my last entry, the use of cloud storage for primary data has become more practical, especially for NAS-based use. It still requires more work, but there are already some niche use cases. What about the other extreme: using cloud storage for SAN-based data? The first question is probably why you would want to do that at all.


Assuming you are using SAN storage, or block-based data, for your applications, why wouldn't you just host the entire application in the cloud as well? The application could be unique to your company, or one you can't find hosted anywhere else. There may also be a desire to keep the compute piece internal. Finally, it may simply be less expensive to host just the data in the cloud.

Whatever the motivation, if you are going to place application data in the cloud, you need a way to get it there quickly without impacting application performance. As was the case with file-based data, much of the required technology already exists for SAN-based data movement in the form of automated tiering; the cloud would simply be the extreme use case of that technology.

Essentially, one tier within the automated structure would sit in your data center, built on flash SSDs or mechanical hard drives. Then, as blocks of information aged, they would be moved to less expensive cloud storage at the provider's facility. For many data centers, a few TBs of active blocks and a relatively high-speed connection would be all they need, reducing a storage implementation to the equivalent of installing a broadband modem. We are not there yet, but we could be within the next few years, and the approach could be very viable, especially for small to medium-sized businesses. A box with a broadband connection and local cache storage could also provide compute power for a ready-to-go virtualized server environment.
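The aging policy described above can be sketched in a few lines of Python. This is a toy illustration, not any vendor's implementation: the class and constant names (CloudTier, TieredStore, AGE_LIMIT) are hypothetical, and a real system would operate on raw block addresses and call a cloud provider's API rather than an in-memory dictionary.

```python
import time

AGE_LIMIT = 60 * 60 * 24 * 30  # demote blocks untouched for ~30 days (hypothetical threshold)

class CloudTier:
    """Stand-in for the provider's cloud storage; a real one would call a remote API."""
    def __init__(self):
        self.blocks = {}

    def put(self, block_id, data):
        self.blocks[block_id] = data

    def get(self, block_id):
        return self.blocks[block_id]

class TieredStore:
    """Local tier that demotes cold blocks to a cloud tier as they age."""
    def __init__(self, cloud):
        self.local = {}   # block_id -> (data, last_access_time)
        self.cloud = cloud

    def write(self, block_id, data):
        self.local[block_id] = (data, time.time())

    def read(self, block_id):
        if block_id in self.local:
            data, _ = self.local[block_id]
            self.local[block_id] = (data, time.time())  # refresh the block's age
            return data
        # Cache miss: recall the block from the cloud tier back into the local tier.
        data = self.cloud.get(block_id)
        self.local[block_id] = (data, time.time())
        return data

    def demote_cold_blocks(self, now=None):
        """Move any block not touched within AGE_LIMIT out to the cloud tier."""
        now = now if now is not None else time.time()
        for block_id in list(self.local):
            data, last_access = self.local[block_id]
            if now - last_access > AGE_LIMIT:
                self.cloud.put(block_id, data)
                del self.local[block_id]
```

In this sketch the local tier is the "broadband modem" box: active blocks stay on fast local media, cold blocks migrate out, and a read of a demoted block transparently recalls it.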

The automated tiering technology would have to be able to move data at a block level or smaller. Ideally, it would also keep copies of whatever is being changed at the data center, as time and bandwidth allow. The cloud provider could then deliver all the data management, data protection, and disaster recovery services that a small data center would need. This concept may be a little forward-thinking now, but the underlying components of the technology exist today. Nothing needs to be invented, just extended.
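The "copy changes as time and bandwidth allow" idea is essentially opportunistic write-back. A minimal sketch, assuming a hypothetical queue of dirty block IDs and an injected upload callable (nothing here corresponds to a real product's API):

```python
from collections import deque

class WriteBackQueue:
    """Track changed blocks and push them to the cloud copy when bandwidth is free."""
    def __init__(self, upload):
        self.dirty = deque()   # block IDs awaiting upload, oldest first
        self.pending = set()   # avoid queuing the same block twice
        self.upload = upload   # callable(block_id) that copies the block to the cloud

    def mark_dirty(self, block_id):
        """Call on every local write; re-marking an already-queued block is a no-op."""
        if block_id not in self.pending:
            self.dirty.append(block_id)
            self.pending.add(block_id)

    def drain(self, budget):
        """Upload up to `budget` blocks; invoke during idle or off-peak periods."""
        sent = 0
        while self.dirty and sent < budget:
            block_id = self.dirty.popleft()
            self.pending.discard(block_id)
            self.upload(block_id)
            sent += 1
        return sent
```

The `budget` parameter is the knob that keeps replication from competing with application traffic: a scheduler would raise it when the link is idle and lower it during business hours.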

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, ...