Storage is getting complicated. What used to be a simple question of SAN versus NAS is now a fast-changing maelstrom of new ideas, new ways to solve storage problems, and new vendors. As Western Digital CEO Steve Milligan said at the company's recent product launch, "There’s never been a time where there has been so much change and disruption simultaneously."
The advent of the SSD triggered a sea change in storage. The major storage vendors gave way to a wave of new companies that make only flash chips or drive-level products. WD was lucky enough to buy HGST and carve out a leadership position in enterprise SSD, but Seagate wasn’t quite as quick and languishes behind.
The result is two major hard drive vendors left in the game, and they are increasingly focused on bulk, high-capacity hard drives. Seagate recently announced 8 TB on a single drive, but was quickly upstaged by HGST, which unveiled 10 TB helium-filled drives with a much lower power profile.
WD also announced a high-density storage appliance that can enable a 10-petabyte storage rack. With the benefit of vertical integration through the drives, this is a major position play in the storage array/appliance space. The new appliance allows WD to make a credible bid for direct sales to cloud service providers (CSPs), which is something that eludes EMC and the other traditional array vendors.
Seagate isn’t being left behind in the appliance game. It acquired Xyratex, a major fabricator of arrays and appliances, and has its own strategy for addressing CSPs and large enterprises, taking cloud installations very seriously.
But forces other than new technologies also are changing the storage market. Chinese ODMs are feeling their way into the US end-user market, bolstered by large-volume sales to both the CSPs and US system OEMs. With a combination of open-source code such as Ceph and OpenStack, and third-party code providers that range from Microsoft’s Storage Server to Red Hat’s well-featured storage suite, ODMs provide functionality that isn't much different from that of traditional suppliers.
All of this means that a new set of providers will enter the market, undercutting the big-iron vendors by substantial amounts. Once these vendors work through the teething problems of setting up support, they’ll be formidable contestants, because they already have high-volume shipment logistics and cost management in place -- they aren’t startups!
Meanwhile, the average IT guy is trying to figure out “software defined” and “converged” and “hyper-converged.” There’s a lot of hype clouding the air. The reality is that behind the overblown and confusing terminology is a germ of an idea: Storage functions that perform high-speed, latency-sensitive transfers can be moved to virtual machine instances in the server farm.
Virtualization makes some sense, but it’s getting lost in talk of distributing drives across all the servers in VSANs. That approach leaves the servers sub-optimized, although the strategy of using PCIe flash cards in a server pool makes sense. There’s a caveat with the flash card approach, however. To optimize its performance, the application set needs to be tuned to co-reside with the data it’s crunching. If Hadoop MapReduce were tweaked to address that, clustered PCIe cards should be a real winner.
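The co-residency point above is essentially the data-locality idea behind MapReduce: move the computation to the node that already holds the data rather than pulling the data across the network to idle compute. A minimal sketch of that scheduling preference, with illustrative names (`schedule`, `block_locations`) that are not real Hadoop APIs:

```python
def schedule(tasks, block_locations, nodes):
    """Assign each task to a node that already holds its input block.

    tasks: dict of task_id -> input block_id
    block_locations: dict of block_id -> set of node names storing that block
    nodes: list of all node names, used as a fallback
    """
    assignments = {}
    for task_id, block_id in tasks.items():
        local_nodes = block_locations.get(block_id, set())
        # Prefer a node that already stores the block ("move compute to
        # data"); otherwise fall back to any node and pay the transfer cost.
        assignments[task_id] = next(iter(sorted(local_nodes)), nodes[0])
    return assignments


if __name__ == "__main__":
    tasks = {"t1": "b1", "t2": "b2", "t3": "b3"}
    locations = {"b1": {"nodeA"}, "b2": {"nodeB"}}  # b3 is not local anywhere
    print(schedule(tasks, locations, ["nodeA", "nodeB", "nodeC"]))
```

The same preference applied to a pool of PCIe flash cards would keep latency-sensitive tasks on the server whose card caches their working set, which is the tuning the paragraph above argues for.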
Another notable storage trend is that enterprise hard disk drives are being rapidly supplanted by SSD. In fact, Mike Cordano, WD’s president, said at the recent HGST product launch that WD has reduced R&D for that drive type. We are moving from an old tiering system of fast and slow HDD to one of ultra-fast SSD and bulk hard drives.
Finally, object storage is growing rapidly, in part because of unstructured data, and is expected to dominate storage in a few years. The technology is still evolving, with vendors like Caringo offering a full feature set. WD is investing in this area, too, reinforcing the view that it sees itself as a vertically integrated storage supplier in a few years’ time.
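What distinguishes object storage from the file model is the flat key namespace and the metadata that travels with each object, which is why it suits unstructured data. A toy in-memory sketch of those semantics (this is an illustration of the model, not any vendor's API):

```python
class ObjectStore:
    """Toy object store: a flat namespace of key -> (blob, metadata)."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, metadata=None):
        # Keys are flat strings; a "/" is just part of the name,
        # not a directory as it would be in a filesystem.
        self._objects[key] = (bytes(data), dict(metadata or {}))

    def get(self, key):
        data, _ = self._objects[key]
        return data

    def head(self, key):
        # User metadata is stored alongside the object itself, a key
        # difference from POSIX files, where attributes belong to the
        # filesystem rather than the data.
        _, metadata = self._objects[key]
        return metadata


store = ObjectStore()
store.put("videos/cat.mp4", b"\x00\x01", {"content-type": "video/mp4"})
print(store.head("videos/cat.mp4"))
```

Real systems such as Ceph's RADOS layer add replication, erasure coding, and distributed placement on top of this basic key/blob/metadata model.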
One can imagine a future where storage is essentially COTS-based, running on low-cost hardware platforms from a number of vendors, with software stacks that include open-source and proprietary technologies. In such a situation, the price of storage will have dropped a good bit from today’s level, while performance will be much higher. But the road to that nirvana is constantly twisting and changing.