As we discussed in our last entry, storage tiering will soon extend beyond the storage system. Active data will be pushed closer to the application, either onto the host that supports the application or even directly into the virtual machine that hosts it. The future of location-based storage tiering will depend on vendors being able to orchestrate that movement.
There are systems available now that accomplish this, to some extent, by running caching software on a host equipped with PCIe solid state storage (SSS) and separate automated tiering software on the storage system. The challenge today is that these components come from different vendors and are not in sync with each other. It won't be long before a single vendor takes the next step and moves data not only between tiers within the storage system itself, but across the entire infrastructure. Very active data may move to a PCIe-based solid-state device inside the server and then, as it ages, be moved down to solid-state storage inside the storage system and finally to the remaining hard disk tier.
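The age-based demotion described above can be sketched as a simple policy loop. The tier names and age thresholds below are hypothetical illustrations, not taken from any particular product:

```python
from dataclasses import dataclass

# Hypothetical tiers, ordered hottest to coldest.
TIERS = ["server_pcie_ssd", "array_ssd", "array_hdd"]

# Hypothetical demotion thresholds in days: past 1 day drop to array SSD,
# past 30 days drop to the hard disk tier.
THRESHOLDS = [1.0, 30.0]

@dataclass
class DataBlock:
    name: str
    age_days: float          # time since last access
    tier: int = 0            # index into TIERS, starts on the hottest tier

def demote_if_aged(block: DataBlock) -> DataBlock:
    """Move a block down one tier each time it crosses the next age threshold."""
    while block.tier < len(TIERS) - 1 and block.age_days > THRESHOLDS[block.tier]:
        block.tier += 1
    return block

blocks = [DataBlock("hot.db", 0.2), DataBlock("warm.log", 5), DataBlock("cold.bak", 90)]
for b in blocks:
    demote_if_aged(b)
    print(b.name, "->", TIERS[b.tier])
```

A real orchestrator would of course track access frequency as well as age and move data in both directions, but the demotion path is the core of the policy.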
A single vendor could then manage the process so well that even write activity could be cached locally for a short time. Essentially, all inbound write I/O traffic could be stored in the server-based PCIe solid-state storage device first, coalesced and then written across the network to the SSDs in the storage system. This would make the network more efficient as well as increase the life expectancy of the solid-state storage in the shared storage system.
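The coalescing step might look something like the following sketch, which merges overlapping or adjacent small writes into fewer, larger transfers before they cross the network; the offsets and lengths are illustrative:

```python
def coalesce_writes(writes):
    """Merge overlapping or adjacent (offset, length) writes into fewer, larger I/Os.

    Small writes land in the server-side PCIe flash first; before flushing to
    the shared storage system, runs that touch contiguous byte ranges are
    combined so fewer, larger I/Os go over the network.
    """
    merged = []
    for off, length in sorted(writes):
        if merged and off <= merged[-1][0] + merged[-1][1]:
            # Overlaps or abuts the previous run: extend it.
            prev_off, prev_len = merged[-1]
            merged[-1] = (prev_off, max(prev_len, off + length - prev_off))
        else:
            merged.append((off, length))
    return merged

# Six small writes collapse into two larger transfers.
incoming = [(0, 4), (4, 4), (8, 4), (100, 8), (104, 8), (12, 4)]
print(coalesce_writes(incoming))
```

Beyond cutting network traffic, fewer and larger writes mean fewer program/erase cycles on the shared array's flash, which is where the life-expectancy benefit comes from.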
Finally, the tiering could even extend outside the data center by pushing data to the cloud. It would be possible for a vendor with this technology to auto-tier data that has become inactive to an external cloud storage provider, further reducing internal costs and the data center footprint.
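A cloud-tiering policy of this kind could be as simple as flagging data that has been idle past a threshold. The 180-day cutoff and object names below are purely illustrative:

```python
import time

DAY = 86400
ARCHIVE_AFTER_DAYS = 180  # hypothetical policy threshold

def select_for_cloud(objects, now=None):
    """Return names of objects idle long enough to push to external cloud storage.

    `objects` maps an object name to its last-access timestamp (epoch seconds).
    """
    if now is None:
        now = time.time()
    return [name for name, last_access in objects.items()
            if now - last_access > ARCHIVE_AFTER_DAYS * DAY]

now = time.time()
objs = {"q1_report.pdf": now - 400 * DAY, "current.db": now - 2 * DAY}
print(select_for_cloud(objs, now))
```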
As we discuss in our article "The Storage Hypervisor", the data architecture of the future is likely to be much more fluid, and the hypervisor may be the component best qualified to control it all. It will be an architecture where data is pushed as close to the application as possible while maintaining data availability. It will likely be a mix of solid-state storage devices placed inside the server and solid-state storage devices installed either alongside or potentially in place of legacy hard drive storage systems.
The net effect for the storage administrator will be a much easier environment in which to manage performance. This is critical because, as the data center becomes more virtualized, fine-tuning performance for specific applications is going to become increasingly complex. When everything is simply faster and data is moved automatically as close to the application as possible, performance tuning, at least from a storage perspective, will become a thing of the past.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.