Hardware Vs. Software: Storage Vendors Go Head-to-Head
Software-defined infrastructure, IT's buzz term du jour, can lead users to assume that storage must now be software-based. However, we must be careful not to define storage within SDI so narrowly that storage hardware vendors appear to be playing at a disadvantage.
While I know firsthand that we can more fully automate IT by integrating data placement and management functions into a software stack, I see plenty of room in the storage market for both hardware- and software-based products.
During the past few weeks, storage vendors have unveiled new strategies, product functions and funding that promise greater overall value to storage administrators and businesses in both traditional infrastructure and SDI. Let's look at some of the recent developments:
• Storage giant EMC made moves on both the software and hardware fronts. With the release of its new VNX models and MCx software, EMC has further improved flash memory performance. EMC claims the new products can achieve four times the performance at a third of the cost of previous VNX models.
Meanwhile, on the software side, ViPR will enable EMC customers to view objects as files. While this capability is not new to the market, it represents a significant step forward for EMC in software-based storage.
In my opinion, EMC is placing bets on both storage hardware and storage software. Clearly, EMC brings the sheer weight of its massive development engineering resources to bear as it maintains traditional hardware product lines and delves into the new world of software-based storage.
• Pure Storage, a provider of all-flash storage arrays, made storage industry history by announcing an additional $150 million in pre-IPO funding, raising its total funding to date to $245 million. Additionally, Frank Slootman, former CEO of mega-successful data dedupe company Data Domain--which was purchased by EMC--has joined the Pure Storage board of directors.
[IBM's new FlashCache Storage Accelerator uses software to boost flash performance. Read how the technology could signal a trend for the industry in "Speeding Up Flash Storage Via Software."]
Pure Storage is a good choice for deployment within the traditional IT infrastructure or within the emerging software-defined infrastructure. The company has demonstrated staying power in the market and proven I/O performance, and it's focused on replacing spinning disk with flash-based arrays. Applications, processors and networks in a software-defined infrastructure can leverage Pure Storage flash arrays.
• Red Hat announced a new version of its Storage Server line as well as a test drive of Red Hat Storage installed within Amazon Web Services. Red Hat’s basic value proposition is the ability to deploy storage software on commodity server and storage combinations, which then yield low-cost storage arrays for use as file and object servers along with support for Hadoop and OpenStack environments.
Further, as a pure software product, Red Hat Storage Server can be deployed within AWS. Its big value proposition there is that Red Hat customers can move file-based applications directly from the data center to an AWS environment without rewriting the application.
Version 2.1 of Red Hat Storage Server includes improvements to Geo-Replication, its network-based asynchronous data replication feature, and further interoperability with Windows, including full support for SMB 2.0 and Active Directory. In addition, Storage Server has been integrated into Red Hat's Satellite management software, enabling easier installation and deployment of the storage software. Red Hat Storage Server can be used in a typical IT infrastructure as well as with SDI, but Red Hat clearly is placing a big bet on SDI and staking out its ground early in the game.
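Red Hat Storage Server is built on GlusterFS, so its commodity-hardware pitch can be illustrated with standard Gluster commands. The host names, brick paths and volume name below are hypothetical examples, and geo-replication syntax varies across GlusterFS releases; treat this as a sketch of the workflow, not a verified deployment recipe.

```shell
# Pool two commodity servers into a replicated Gluster volume
# (host names, brick paths and the volume name are made up for illustration).
gluster volume create datavol replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick1
gluster volume start datavol

# Clients mount the volume as an ordinary POSIX file system,
# which is why file-based applications need no rewriting.
mount -t glusterfs server1:/datavol /mnt/datavol

# Asynchronous geo-replication to a remote site
# (exact syntax depends on the GlusterFS release).
gluster volume geo-replication datavol remotehost::datavol create push-pem
gluster volume geo-replication datavol remotehost::datavol start
```

The same mount works whether the servers sit in a data center or in AWS, which is the crux of Red Hat's lift-and-shift argument.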
It’s my belief that all of these storage vendors fit within both traditional infrastructure and SDI. It’s important to realize that no storage product today fits directly within an application server stack within an SDI. However, some companies are developing data replication within the stack to move the function closer to the data and relieve the storage platform of this task. I’ll have more to say on these companies in upcoming blogs.
Have you deployed any of these storage products within your infrastructure? How is it going? Are you building a software-defined infrastructure? Have you been able to automate critical functions? We’re eager to hear your thoughts and comments.
[Get insight into how flash-based SSDs work and the various ways they can be deployed in "SSDs In The Data Center" at Interop New York Sept. 30-Oct. 4. Register today!]