Monolithic Scale-Out NAS Is Out Of Gas
The cloudwashing that’s running rampant throughout the storage industry has clearly got to stop. My latest observation is that traditional monolithic scale-out network-attached storage (NAS) vendors are so aggressively selling their offerings for the cloud that their claims are completely out of control. Let’s face it: Traditional monolithic scale-out NAS is the same old NAS. It’s not cloud storage and it’s in no way economical. This old-fashioned kind of NAS is limited. In my opinion, if you purchase traditional monolithic scale-out NAS, you’re just investing in another silo of storage; there is no cloud there.
Some leading global corporations are bypassing monolithic scale-out NAS hardware boxes altogether when making their next storage purchases. They're thinking bigger: interfacing their storage needs directly with a cloud provider's APIs, or deploying newer, more innovative file storage solutions. APIs control access to the storage; NAS isn't required for this type of access, and where NAS does fit in the cloud, it isn't as big traditional monolithic storage boxes.
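To make the contrast concrete, here is a minimal sketch of what "interfacing directly with a storage API" looks like from the application's point of view. The `ObjectStore` class below is a hypothetical in-memory stand-in for a cloud provider's object-storage API, not any vendor's actual client library; the point is that the application addresses data by bucket and key through an API call, rather than by a file path on a mounted NAS volume.

```python
class ObjectStore:
    """Hypothetical in-memory stand-in for a cloud object-storage API."""

    def __init__(self):
        self._buckets = {}

    def put_object(self, bucket, key, data):
        # Store an object under bucket/key, the way a cloud PUT call would.
        self._buckets.setdefault(bucket, {})[key] = data

    def get_object(self, bucket, key):
        # Retrieve the object by bucket/key, the way a cloud GET call would.
        return self._buckets[bucket][key]


# The application talks to storage through the API: no NAS mount, no
# middleman storage box, just bucket/key addressing.
store = ObjectStore()
store.put_object("invoices", "2013/04/inv-001.pdf", b"%PDF-...")
data = store.get_object("invoices", "2013/04/inv-001.pdf")
```

A real deployment would swap the in-memory class for the provider's client SDK, but the application-side shape of the code stays this simple.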
Let’s look at one of the worst cloudwashing offenders, Isilon, whose boxes used to be great for “unstructured data” and now, with the same architecture and software, are apparently ideally suited for “big data,” “analytics” and, of course, the “cloud.” These Isilon “cloud” storage boxes top out at only 144 nodes. Isilon is approaching it all wrong. The real discussion is how and where the application, or business, needs to store data. The key for companies is to cut out the middleman: cloudwashed, old-fashioned monolithic NAS boxes. It’s time for the application to talk to storage directly. It’s time for more companies to integrate with cloud providers’ APIs and more innovative file storage solutions, and streamline their overall IT stack.
Another topic that needs to be called out is how Isilon talks about a global namespace. In my opinion, it has more of a single namespace. Its reach is limited to the physical construct of a single building. It doesn’t span cities. It doesn’t span countries, so, to me, that means it’s not global. It’s not designed for millions of users and billions of objects. Think 100,000 nodes for a real-world cloud deployment, not just 144 nodes.
A global namespace is a virtualized layer that sits on top of customers’ content files, so no matter where you upload or download a file from, it always shows up as the exact same file. When you upload a file, even before it’s replicated to other nodes, it’s available right away: the system redirects users to the node where the file actually exists. This means you can upload a file anywhere in the network, and every other user of that namespace will see the same file, regardless of location: building, city or country. In my opinion, that’s a true global namespace.
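The redirect behavior described above can be sketched in a few lines. This is an illustrative model under stated assumptions, not any vendor's implementation: a shared key-to-owner map (`GlobalNamespace`) lets any node resolve a freshly uploaded file immediately, redirecting reads to the node that physically holds it until replication catches up. The names `GlobalNamespace` and `Node` are hypothetical.

```python
class Node:
    """A storage node in some location (building, city, country)."""

    def __init__(self, name):
        self.name = name
        self.files = {}  # locally stored copies


class GlobalNamespace:
    """Shared map from file path to the node holding the authoritative copy."""

    def __init__(self):
        self.owner = {}  # path -> owning Node

    def upload(self, node, path, data):
        node.files[path] = data
        self.owner[path] = node  # visible to every node immediately

    def download(self, requesting_node, path):
        if path in requesting_node.files:  # a local replica already exists
            return requesting_node.files[path]
        # No local copy yet: redirect the read to the owning node.
        return self.owner[path].files[path]


ns = GlobalNamespace()
tokyo, london = Node("tokyo"), Node("london")
ns.upload(tokyo, "/projects/report.doc", b"v1")
# London resolves the file right away, before any replication has happened.
content = ns.download(london, "/projects/report.doc")  # b"v1"
```

Scaling this map to millions of users and billions of objects across cities and countries is the hard part, and it is exactly what a 144-node box confined to one building does not do.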