Enterprise storage and flash arrays are sold to customers today with a built-in hardware refresh cycle. These products tend to lock users into a fixed set of technology for three to five years, depending on the organization and its rate of data growth. Storage eventually needs to be refreshed, and as better, faster and cheaper arrays come along, customers end up paying ever-higher maintenance charges, which are all too often overpriced.
During the period prior to the refresh, a customer has no ability to leverage the latest and greatest hardware improvements. At the end of that time, when customers need to upgrade the hardware, they are forced to plan and execute a complex data migration effort, which often involves application downtime or performance impact to move data.
While this process has been pervasive for the past decade, it comes with significant cost and risk. In most cases, customers pay most, if not all, of the expense, while the vendor simply folds the migration charge into the $/GB cost of the storage. This introduces cost and complexity for IT departments, including:
Cost of hardware. Flash capacity and density are advancing at a rate of 30% to 50% a year. Consumers locked into hardware for three years will usually spend more money in the long run on upgrades than they would with architectures that can deploy the latest technology immediately.
Cost of migration. The cost of data migration is an obvious but critical issue for businesses looking to upgrade their data storage solutions, yet IT organizations often give it too little weight in their decisions. Plenty of estimation resources are available online to give an idea of what the costs really are, but few companies use them.
Business risk. Data migration introduces the risk of unplanned outages, performance issues, and even a loss of data.
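The hardware-cost argument above can be made concrete with simple arithmetic. The sketch below is a hypothetical illustration, not vendor data: it assumes a 40% annual decline in $/GB (within the 30% to 50% improvement range cited above) and a made-up starting price and growth rate, then compares buying three years of projected capacity up front at today's price against buying each year's growth at that year's price.

```python
def price_per_gb(year, start=0.50, annual_decline=0.40):
    """Hypothetical flash price in $/GB after `year` years of decline."""
    return start * (1 - annual_decline) ** year

def locked_in_cost(total_gb):
    """Buy all projected capacity up front at year-0 prices."""
    return total_gb * price_per_gb(0)

def incremental_cost(gb_per_year, years=3):
    """Buy one year's growth at a time, at each year's lower price."""
    return sum(gb_per_year * price_per_gb(y) for y in range(years))

growth = 100_000  # GB of new capacity needed per year (hypothetical)
print(f"Three-year lock-in: ${locked_in_cost(growth * 3):,.0f}")
print(f"Incremental buying: ${incremental_cost(growth):,.0f}")
```

Under these illustrative numbers, deferring purchases cuts the hardware bill by roughly a third, and any real comparison would also need to fold in the migration and maintenance costs discussed above.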
The old approach is not beneficial to the customer's wallet or data integrity, driving the need for a drastic change in storage consumption. In practice, customers need to take a leading role in defining their storage and data center future. A storage system should implement a modern, dynamic scale-out technology, as these architectures are designed to let customers align procurement with their capacity or performance needs as they evolve over time.
The architecture facilitates investment protection by allowing customers to mix and match hardware generations within the array, adopting new technologies as they mature and become cost-efficient. At the same time, customers regularly need to decommission parts of the array that have aged past their useful life or can be replaced with newer, more cost-efficient configurations.
Equally as important is the ability for customers to expand their storage without downtime and without forklift upgrades. They also need to be able to decommission hardware when the time is exactly right for the business. Customers are protected down the road; instead of extending support for outdated compute and capacity, they can seamlessly scale to a newer generation of hardware and gracefully decommission the old.
Scale-out systems that support mixed configurations allow customers to take advantage of new technology while preserving their investment in the "old," which might have been in production for less than a year. This scenario is becoming even more important as flash capacity and density improve faster than the Moore's Law pace of doubling roughly every two years.
With the introduction of these new architectures, customers can now perpetually manage a single storage system while quickly deploying new technologies. This approach eliminates the need for forklift upgrades and painful data migrations each time a new platform is introduced.
It is definitely an interesting time for the storage market as the entire industry focuses on providing optimal adaptability.
Shachar Fienblit is the chief technology officer of Kaminario, a global all-flash storage company based in Needham, Massachusetts.