Pure SAS systems, systems that have SAS drives and SAS connectivity to servers, also tend to have very aggressive price points that will be attractive to small to mid-sized cloud providers. These systems can be configured and networked to a dozen or so high-performance servers, each of which could host 10 to 20 virtual machines. As we discuss in our recent white paper "The SAS Primer", most SAS arrays have multiple built-in ports for attaching servers. For greater server connectivity, all 6-Gbps SAS systems can add a device called a SAS expander. By default, any attached server can see all the volumes that have been provisioned. With server virtualization or any clustered file system, this is often exactly what you want: all servers seeing all storage. However, 6-Gbps SAS also adds some basic functionality to limit which volumes can be seen by which servers. The result is similar to more traditional shared storage infrastructures: you can have public volumes for environments that understand how to manage shared access, and private volumes for environments that don't.
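The public/private split described above can be sketched as a simple visibility model. To be clear, the class and method names below are hypothetical, invented purely to illustrate the rule (public volumes visible to every attached server, private volumes restricted to an explicit list); they are not any vendor's actual SAS management interface:

```python
# Illustrative model (not a real SAS management API) of basic volume zoning:
# a public volume is visible to all attached servers, a private volume only
# to the servers it was explicitly provisioned for.

class SasArray:
    def __init__(self):
        # volume name -> set of allowed servers, or None for a public volume
        self.volumes = {}

    def provision(self, volume, private_to=None):
        # Omitting private_to models a public volume visible to all servers.
        self.volumes[volume] = set(private_to) if private_to else None

    def visible_to(self, server):
        # A server sees every public volume plus any private volume
        # it has been explicitly granted.
        return sorted(
            name for name, acl in self.volumes.items()
            if acl is None or server in acl
        )

array = SasArray()
array.provision("vmfs_shared")                      # public: clustered file system
array.provision("db_boot", private_to=["server1"])  # private: one host only

print(array.visible_to("server1"))  # ['db_boot', 'vmfs_shared']
print(array.visible_to("server2"))  # ['vmfs_shared']
```

The design point this mirrors is the one in the paragraph: shared-access-aware environments get public volumes, everything else gets private ones.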
Pure SAS systems do have their shortcomings. They often lack storage features that enterprises have come to count on, such as snapshots, replication, and management tools. The severity of this shortcoming, though, depends on the environment. Many virtualization platforms and operating systems now provide these storage services themselves. Where those are not complete or robust enough, the capabilities can be purchased as software add-ons from third-party storage software vendors. When such software is combined with SAS storage hardware, a very complete, feature-rich storage offering can be built.
The smaller cloud provider can be a perfect candidate for a shared SAS system. First, these providers typically have the internal skill set to manage their way through the "some assembly required" reality of shared SAS storage; this includes implementing the hardware, configuring the add-on storage software, and configuring the SAS network. Second, small cloud providers have the motivation to invest the time in working through some manual processes in order to reduce IT expenses.
The breaking point for pure SAS storage tends to be scale. Today's higher-end Intel processors, when embedded into a SAS storage system, can support relatively high performance and drive counts, but most such systems lack the in-unit or scale-out functionality of storage systems from business-class vendors. When the limits of a single unit are reached, you typically need to add another unit and be prepared to manage it separately. Depending on the growth pace of the business, that one unit may be all that is ever needed, in which case shared SAS is certainly worth a look. Additionally, cloud storage software can use SAS storage as its foundation and provide the services to manage multiple SAS storage systems as a single storage cluster.
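The scale-out point in the last sentence, software pooling several independently managed SAS units into one logical cluster, can be sketched as follows. The names here are hypothetical, chosen only to illustrate the aggregation idea, not any particular cloud storage product:

```python
# Hypothetical sketch: cloud storage software presenting several
# independent SAS units as a single logical capacity pool.

class SasUnit:
    def __init__(self, name, capacity_tb):
        self.name = name
        self.capacity_tb = capacity_tb

class StorageCluster:
    def __init__(self):
        self.units = []

    def add_unit(self, unit):
        # Each unit would otherwise be managed separately; the cluster
        # layer absorbs it into one pool as the business grows.
        self.units.append(unit)

    def total_capacity_tb(self):
        return sum(u.capacity_tb for u in self.units)

cluster = StorageCluster()
cluster.add_unit(SasUnit("unit-a", 48))
cluster.add_unit(SasUnit("unit-b", 48))
print(cluster.total_capacity_tb())  # 96
```

The contrast with bare shared SAS is that capacity growth becomes an `add_unit` operation against one management point rather than a second system to administer on its own.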
For me, the key difference is the "some assembly required" aspect of shared SAS and similar technologies. If your data center or cloud infrastructure has the skill set and, probably more important, the time to support that, then the value of a shared SAS environment can be significant. If not, look for more business-class systems that can be scaled internally and offer a higher degree of time-saving storage automation.