Your Next Storage Purchase: 5 Considerations
Selecting the right storage array can future-proof your data center. Here are five factors to keep in mind as you evaluate storage systems.
December 2, 2014
Storage infrastructure effectively serves as the heart of the data center, providing a functional foundation for every business. An optimal design is one that works invisibly, acting as the unseen enabler of seamless, nonstop operations. In contrast, a suboptimal design can be the bane of IT and business operations, requiring constant care and, more often than not, increased funding.
Consider this: Your data will outlive any software or hardware investment. Unlike applications or servers, data persists and grows year over year, becoming increasingly difficult to manage, store, and migrate. Implementing a storage system that can grow flexibly with your data, simplify management as it scales, and deliver long-term efficiency and measurable cost benefits will give your organization a competitive advantage.
Given the pace of innovation and the breadth of business applications in the market today, selecting the right storage infrastructure can be daunting. The following considerations should be part of your selection criteria. They go beyond spec-sheet questions such as "How many IOPS?" and "Scale up or scale out?" to focus on requirements that scale with your data, deliver meaningful total cost of ownership (TCO) savings, and provide real return on investment (ROI) as your deployment matures.
1. Latency
Modern storage solutions tend to compete on IOPS -- how many I/O operations an array can service per second. Raw performance is critical, but the ability to deliver data consistently, with sub-millisecond latency, is paramount. It's latency, not IOPS, that determines the responsiveness of every application. Whether you need databases to process more transactions or VDI environments to be more responsive, flash is a top consideration here.
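To make the distinction concrete, here's a minimal sketch in Python, with invented latency samples, showing how averages and headline IOPS can mask the inconsistency users actually feel. Array B completes most I/Os faster than Array A, yet its 99th-percentile latency is an order of magnitude worse, and it's that tail that stalls transactions and VDI sessions.

    import statistics

    def percentile(samples, pct):
        """Return the pct-th percentile (0-100) of a list of samples."""
        ordered = sorted(samples)
        index = min(len(ordered) - 1, int(round(pct / 100.0 * len(ordered))))
        return ordered[index]

    # Hypothetical per-I/O latencies (milliseconds) for two arrays that
    # advertise the same peak IOPS figure. Real numbers would come from a
    # benchmark tool run against your own workload.
    array_a = [0.4, 0.5, 0.4, 0.6, 0.5, 0.4, 0.5, 0.6, 0.5, 0.4]  # consistent
    array_b = [0.2, 0.3, 0.2, 0.3, 0.2, 0.3, 0.2, 0.3, 8.0, 12.0]  # spiky tail

    for name, samples in (("Array A", array_a), ("Array B", array_b)):
        print(f"{name}: mean = {statistics.mean(samples):.2f} ms, "
              f"p99 = {percentile(samples, 99):.2f} ms")

When evaluating vendors, ask for 99th-percentile (or higher) latency under a workload that resembles yours, not just the peak IOPS number on the datasheet.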
2. Simplicity
Data growth is outpacing both IT budgets and staffing resources. Organizations must therefore select technologies that allow the data center to scale by letting each administrator manage far more data than they do today. Simplicity is the best means to that end.
Some things to look for: How many configuration options and tunable settings does the array expose? Fewer is better. Does the array require applications to be reconfigured or optimized for it? If so, it may not be the right choice. Truly simple storage architectures inherently adapt to varied workloads without requiring changes to your applications.
3. Space, power, and cooling requirements
Globally, data is growing by more than 50% per year on average. If you're not feeling the crunch of constrained data center resources today, you likely will be in the near future. Traditional storage systems demand significant rack space, along with an immense amount of power and cooling, to achieve acceptable performance. Next-generation storage systems, such as those built on flash memory, can apply data reduction technologies like deduplication and compression to shrink your overall data footprint, resulting in lower acquisition costs, increased density, and greater space, power, and cooling efficiency.
Simply put: The greater the data reduction, the greater the savings. Reducing your storage infrastructure's power and cooling requirements has an immediate, positive impact on both your company's opex and its carbon footprint, and it can add decades of usable life to your existing data center.
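As a back-of-the-envelope illustration, the sketch below compounds the 50% annual growth rate cited above against an assumed 5:1 deduplication-plus-compression ratio. The starting capacity and the reduction ratio are hypothetical; plug in figures from your own environment.

    def raw_capacity_needed(logical_tb, reduction_ratio):
        """Physical capacity required after dedupe and compression."""
        return logical_tb / reduction_ratio

    logical_tb = 100.0  # data under management today (hypothetical)
    growth = 0.50       # 50% annual growth, per the industry average above
    reduction = 5.0     # assumed 5:1 combined dedupe/compression ratio

    for year in range(4):
        raw = raw_capacity_needed(logical_tb, reduction)
        print(f"Year {year}: {logical_tb:7.1f} TB logical -> {raw:6.1f} TB raw")
        logical_tb *= 1 + growth

Even under these assumptions, the raw footprint in year three (and the power and cooling bill that tracks it) is smaller than the unreduced footprint would have been in year zero.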
4. Data migrations
Until recently, data migrations on traditional storage arrays were a labor- and resource-intensive endeavor, typically consuming the first and last 90 (or more) days of an array's life span. That effectively reduces a three-year depreciation lifecycle to two and a half years (or less) of actual use, as the arithmetic below shows. A new breed of storage technology is making painful data migrations a thing of the past.
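The math is worth spelling out. A minimal sketch, using a hypothetical purchase price:

    lifespan_months = 36
    migration_months = 3 + 3  # ~90 days migrating in, ~90 migrating out
    useful_months = lifespan_months - migration_months

    purchase_price = 300_000  # hypothetical array cost, USD
    nominal = purchase_price / lifespan_months
    effective = purchase_price / useful_months

    print(f"Useful service: {useful_months} of {lifespan_months} months")
    print(f"Effective monthly cost is {effective / nominal - 1:.0%} higher "
          f"than the depreciation schedule assumes")

Losing six months of a 36-month cycle to migrations raises the effective cost of every remaining month of service by 20%.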
When evaluating a storage system, look for a truly next-generation platform that enables hardware to be replaced or upgraded without any disruption or reconfiguration of your IT environment. This model goes beyond scale-out or scale-up architectures, and beyond simply layering software abstraction across storage platforms. New hardware in, old hardware out, and the new hardware should not have to match the old. That's investment protection.
5. Automation
"Software-defined" is the new model of data center operations. It's the evolution of server virtualization that spans compute, network, and storage, enabling the dynamic provisioning and configuration of hardware devices and providing new levels of agility to accelerate your business.
To ensure your business and IT organization can keep pace with the digital economy, your storage platform must be programmable, integrate fully with your existing infrastructure and applications, and adapt to changes in workloads or datasets. RESTful APIs are emerging as the de facto standard for that programmability and should be considered mandatory, alongside mature integrations with VMware VAAI, Microsoft PowerShell and VSS, and OpenStack.
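As a minimal sketch of what that programmability looks like, the example below provisions volumes over a REST API. The endpoint, payload fields, and token are hypothetical; every vendor's schema differs, so consult your array's API reference.

    import requests

    ARRAY = "https://array.example.com/api/v1"  # hypothetical endpoint
    HEADERS = {
        "Authorization": "Bearer <token>",      # placeholder credential
        "Content-Type": "application/json",
    }

    def create_volume(name, size_gb):
        """Provision a volume through the array's REST API instead of a GUI."""
        resp = requests.post(
            f"{ARRAY}/volumes",
            headers=HEADERS,
            json={"name": name, "size_gb": size_gb},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # Example: carve out per-desktop volumes as part of a VDI rollout script.
    for i in range(3):
        create_volume(f"vdi-desktop-{i:03d}", size_gb=50)

The point is not this particular snippet but the operational model it enables: the same call can be embedded in an orchestration workflow, a PowerShell script, or an OpenStack Cinder driver, so storage keeps pace with the rest of the software-defined stack.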
A high-performing, scalable, and efficient storage system can future-proof your data center and your business. Be sure to consider the long-term implications when evaluating your next storage purchase, weighing latency, scalability, manageability, simplicity, TCO, and performance, to ensure your infrastructure can support your business well into the future.