In my last blog post, I discussed how storage I/O decision making involves understanding an application’s PACE (Performance, Availability, Capacity, and Economic) characteristics, as well as knowing the capabilities of various technologies. Ideally, this also means aligning the various technologies (hardware, software, services) to your application vs. fitting your application to meet the capabilities or limitations of a particular technology.
Assuming you have an understanding of your various application PACE needs and requirements, what about the technology? Are you looking at technology that fits, meets or exceeds what your environment and applications need today as well as tomorrow? Or are you playing buzzword bingo by shopping around for different technologies and services that you can then align your applications to?
Here are some of the popular buzzwords used in the storage industry today that storage buyers should tread carefully around:
All-flash arrays (AFAs). One of the more popular industry buzzwords, AFAs evolved from earlier all-SSD arrays (ASAs), which were built on DRAM with HDD or NAND persistent backing storage (or both) combined with some form of battery backup power. ASAs have been around for decades, used as a tier of storage or, in some environments, as the entire storage pool for specific applications. Today's AFAs borrow plays from the SSD sales playbook. One of those time-tested plays -- besides boosting bandwidth and IOPS or reducing latency for applications -- is prolonging the life of a server by getting more useful work out of it.
Non-volatile memory. NVM, including SSD variations from DRAM to NAND flash to emerging technologies such as 3D XPoint, is in your future if you're not already dealing with it. The questions are when, where, why, with what, how much and how to deploy. Look at NVM such as flash SSD in the context of productivity -- how much work you can get done per dollar spent -- as opposed to how much raw capacity your dollar buys.
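To make that productivity-vs.-capacity framing concrete, here is a minimal sketch of the two ways to divide up the same purchase price. All device names and figures below are made up for illustration; real pricing, IOPS and capacity vary widely by product and workload.

```python
# Hypothetical, illustrative numbers only -- not real product pricing or specs.
def cost_per_iops(price_usd, iops):
    """Dollars spent per I/O operation per second delivered (productivity view)."""
    return price_usd / iops

def cost_per_gb(price_usd, capacity_gb):
    """Dollars spent per gigabyte of raw capacity (space view)."""
    return price_usd / capacity_gb

# Assumed example devices (fabricated figures for comparison):
hdd = {"price": 100.0, "iops": 150, "gb": 4000}    # capacity-oriented HDD
ssd = {"price": 200.0, "iops": 50000, "gb": 1000}  # NAND flash SSD

# Judged by capacity alone, the HDD wins:
print(cost_per_gb(hdd["price"], hdd["gb"]))  # 0.025 ($/GB)
print(cost_per_gb(ssd["price"], ssd["gb"]))  # 0.2 ($/GB)

# Judged by work done per dollar, the SSD wins by orders of magnitude:
print(cost_per_iops(hdd["price"], hdd["iops"]))  # ~0.667 ($/IOPS)
print(cost_per_iops(ssd["price"], ssd["iops"]))  # 0.004 ($/IOPS)
```

Which metric matters depends on your application's PACE profile: capacity-bound archives favor $/GB, while latency- and IOPS-bound workloads favor $/IOPS.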
Big data. Big data has been an IT industry buzzword for a few years now, usually tied to Hadoop analytics and, in some circles, positioned as the exclusive domain of data scientists. In a more pragmatic world, though, there is big data and very big data, not to mention fast big data, and it applies to all of us. There are also related buzzwords such as data lakes, which are simply large pools of data stored on collections of storage systems that support various access methods, such as HDFS for Hadoop or NFS for files.
Object storage. Object storage has been around for over a decade, with the underlying object architectures around even longer. What is different today is that there are more object access methods, including OpenStack Swift, Amazon Web Services (AWS) Simple Storage Service (S3), VMware Virtual Volumes (VVOLs), SOAP, and JSON. Object storage, in general, is a good fit for bulk storage needs including scratch, import/export, logs, video, audio, image or other streaming data, snapshots, backups and archives. On the other hand, object storage may not be the best fit for rapidly changing data such as a key-value store, a metadata repository with frequent updates, or SQL and NoSQL databases.
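The bulk-data-versus-churning-data distinction comes down to the object access model itself: a flat namespace of keys mapped to blobs that are written and replaced wholesale. Here is a toy in-memory sketch of that pattern; the class and method names are illustrative only and do not represent any vendor's actual API.

```python
# A toy model of the object storage access pattern: flat keys -> blobs,
# whole-object writes. Illustrative only; not any real object storage API.
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        # Objects are replaced wholesale; there is no in-place or partial
        # update, which is one reason object storage suits bulk data
        # (backups, video, archives) better than hot, rapidly changing data.
        self._objects[key] = bytes(data)

    def get(self, key):
        return self._objects[key]

    def list_keys(self, prefix=""):
        # "Folders" are just key prefixes; the namespace itself is flat.
        return sorted(k for k in self._objects if k.startswith(prefix))

store = ObjectStore()
store.put("backups/2016-05-01.tar", b"...archive bytes...")
store.put("video/intro.mp4", b"...stream bytes...")
print(store.list_keys("backups/"))  # ['backups/2016-05-01.tar']
```

Updating one field inside an object means rewriting the whole object, which is why workloads with many small updates -- databases, busy metadata repositories -- are usually better served by block or file storage.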
Some tips and recommendations for navigating these storage buzzwords:
- Context matters when it comes to looking at technologies, tools, and techniques.
- Look for technologies that work for you vs. you having to work for the technology.
- Software does not eliminate vendor lock-in; it just moves the lock-in from hardware to software.
What about converged infrastructure (CI), hyper-converged infrastructure (HCI), cluster-in-a-box (CiB), and software-defined storage? No worries, I'll discuss those in future blog posts. For now, when somebody is rattling off buzzwords that may or may not matter to your environment and applications, at some point simply call out "Bingo!" Then have them explain what is applicable to your environment and applications, and why, vs. what's merely fun to talk about.