
Storage Intelligence Moves Up The Stack

New products for virtual data centers move storage intelligence closer to the application for increased performance.

In today's world, more servers are provisioned as virtual machines than as physical hardware. The virtual data center has become the primary platform on which workloads are deployed. Typically this platform is built from components from different vendors, each with its own language, management structures, and algorithms for optimizing performance. Yet we expect this collection of disparate components to work together seamlessly and deliver deterministic performance.

Aggregating these incongruent systems into a cohesive solution using software is the goal of the software-defined data center (SDDC). This model allows for policy-driven management that aligns resources with application demand. However, integrating all of these components into a single system that can be managed from one point with universal controls is a challenge.
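
To make the policy-driven idea concrete, here is a minimal Python sketch of what policy-based placement could look like. The StoragePolicy and Datastore classes, their fields, and the thresholds are hypothetical illustrations of the concept, not any vendor's actual API.

    # Conceptual sketch only: the policy fields and datastore attributes are
    # hypothetical, not an actual SDDC or vSphere interface.
    from dataclasses import dataclass

    @dataclass
    class StoragePolicy:
        name: str
        min_iops: int               # performance floor the application needs
        max_latency_ms: float       # acceptable I/O latency
        failures_to_tolerate: int   # component failures to survive

    @dataclass
    class Datastore:
        name: str
        available_iops: int
        avg_latency_ms: float
        replicas: int

    def compliant(ds: Datastore, policy: StoragePolicy) -> bool:
        """Return True if the datastore can satisfy the policy's demands."""
        return (ds.available_iops >= policy.min_iops
                and ds.avg_latency_ms <= policy.max_latency_ms
                and ds.replicas > policy.failures_to_tolerate)

    def place(vm_name: str, policy: StoragePolicy, datastores: list) -> str:
        """Pick a compliant datastore instead of hand-matching workloads to LUNs."""
        for ds in datastores:
            if compliant(ds, policy):
                return f"{vm_name} -> {ds.name} (policy '{policy.name}')"
        raise RuntimeError(f"no datastore satisfies policy '{policy.name}'")

    gold = StoragePolicy("gold", min_iops=5000, max_latency_ms=5.0, failures_to_tolerate=1)
    pool = [Datastore("ds-sata", 2000, 20.0, 1), Datastore("ds-flash", 20000, 1.0, 2)]
    print(place("app-vm-01", gold, pool))  # app-vm-01 -> ds-flash (policy 'gold')

The point is that the administrator declares what the application needs, and the platform decides where and how to provide it.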

Rather than waiting for a universal language to stitch everything together, the industry is converging on the software control plane: intelligence moves to the perimeter of the architecture, and the underlying resources are commoditized. For example, the network virtualization platform VMware NSX uses common network components to provide bandwidth and connectivity while leveraging the hypervisor to deliver controls and policy-based services at the virtual infrastructure level.

The same shift of intelligence to where it matters is happening in the storage stack -- for example, in VMware's Virtual SAN and PernixData's FVP. Virtual SAN is an end-to-end solution that replaces the traditional storage infrastructure with a policy-based architecture delivering storage performance and capacity from within the hypervisor. PernixData FVP follows the same model as NSX by decoupling performance from capacity: it leverages the storage array for capacity and data services while using server-side resources to provide storage performance and intelligence close to the application.
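
As a rough illustration of decoupling performance from capacity, the sketch below places a simple server-side LRU read cache in front of a slower backing array. The class names, latencies, and write-through behavior are assumptions made for the example; this is not a description of how FVP or Virtual SAN is actually implemented.

    # Toy model: capacity lives on the shared array, performance comes from a
    # cache running on the server next to the application.
    import time
    from collections import OrderedDict

    class BackingArray:
        """Stands in for the storage array: lots of capacity, higher latency."""
        def __init__(self):
            self.blocks = {}
        def read(self, lba):
            time.sleep(0.005)            # simulate ~5 ms array latency (assumed)
            return self.blocks.get(lba, b"\x00")
        def write(self, lba, data):
            time.sleep(0.005)
            self.blocks[lba] = data

    class HostSideCache:
        """Server-side flash/RAM tier that serves hot reads close to the VM."""
        def __init__(self, array, capacity_blocks=1024):
            self.array = array
            self.capacity = capacity_blocks
            self.cache = OrderedDict()   # LRU: least recently used is evicted first
        def read(self, lba):
            if lba in self.cache:        # hot block: served at server speed
                self.cache.move_to_end(lba)
                return self.cache[lba]
            data = self.array.read(lba)  # cold block: fetched from the array once
            self._insert(lba, data)
            return data
        def write(self, lba, data):
            self.array.write(lba, data)  # write-through keeps the array authoritative
            self._insert(lba, data)
        def _insert(self, lba, data):
            self.cache[lba] = data
            self.cache.move_to_end(lba)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)

Repeated reads of a hot block never leave the server, while the array is left to do what it is good at: providing capacity and data services.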

What these products have in common is tight integration with the hypervisor kernel. Because the hypervisor kernel is rich with information -- including a set of tightly knit resource schedulers -- it is the ideal place to introduce policy-based management engines. The hypervisor becomes a single control plane that manages both the resources and the demand. It provides a single construct for automating instructions in a single language, with a model for granular, per-application quality of service.
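
One way to picture granular, per-application quality of service is as an admission-control loop in that control plane. The token-bucket limiter below is a hypothetical sketch; the IopsBucket class and the per-VM limits are invented for illustration and do not represent any hypervisor's real I/O scheduler.

    # Hypothetical per-VM IOPS limiter: one policy language, enforced per VM
    # rather than per LUN or per array.
    import time

    class IopsBucket:
        """Token bucket: each VM may issue at most `limit_iops` I/Os per second."""
        def __init__(self, limit_iops):
            self.limit = limit_iops
            self.tokens = float(limit_iops)
            self.stamp = time.monotonic()
        def admit(self):
            now = time.monotonic()
            self.tokens = min(self.limit, self.tokens + (now - self.stamp) * self.limit)
            self.stamp = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True          # I/O is dispatched immediately
            return False             # I/O is throttled until tokens refill

    # Limits come from policy and are applied per VM (values are illustrative).
    limits = {"db-vm": IopsBucket(8000), "test-vm": IopsBucket(500)}

    def issue_io(vm_name):
        return "dispatch" if limits[vm_name].admit() else "throttle"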

Moving storage resources directly into the compute layer allows not only for greater control, but also for more detailed insight into resource consumption and distribution. Traditional storage designs for virtualized environments rely on large catchall disk pools to absorb incoming workloads, typically providing only sparse information about how resources are distributed. That model belongs to the past; moving the intelligence up the stack is the natural next step.
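
A short sketch of why this matters for insight: once every I/O is attributed to a VM at the hypervisor, the distribution of consumption falls out directly, which a catchall disk pool cannot report on its own. The IoAccountant class and the sample numbers are purely illustrative.

    # Illustrative per-VM accounting at the hypervisor layer.
    from collections import defaultdict

    class IoAccountant:
        def __init__(self):
            self.per_vm = defaultdict(lambda: {"ios": 0, "bytes": 0})
        def record(self, vm, nbytes):
            self.per_vm[vm]["ios"] += 1
            self.per_vm[vm]["bytes"] += nbytes
        def distribution(self):
            """Share of total I/Os per VM, in percent."""
            total = sum(v["ios"] for v in self.per_vm.values()) or 1
            return {vm: round(100 * v["ios"] / total, 1) for vm, v in self.per_vm.items()}

    acct = IoAccountant()
    for vm, nbytes in [("db-vm", 8192)] * 90 + [("web-vm", 4096)] * 10:
        acct.record(vm, nbytes)
    print(acct.distribution())  # {'db-vm': 90.0, 'web-vm': 10.0}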

Controlling resources as close to the application as possible makes it possible to scale out swiftly. Combining the power of software with the hypervisor creates an application-centric platform that lets you control your environment and contribute to business goals.


Comments

Storage intelligence

Frank, thanks for this post. It certainly makes sense from a conceptual standpoint, but I'm having a little trouble picturing it in action. Could you give us an example of how performance could improve from storage intelligence residing closer to the application?

Re: Storage intelligence

My interpretation of this article was that a person's personal workflow could be significantly improved by having direct access to the information and resources that would empower him/her to implement changes.

Re: Storage Intelligence Moves Up The Stack

While I'll agree with Susan that the specifics are a little heady unless you, personally, are someone in charge of virtualization, the concepts here make a whole lot of sense to me. In the big scheme of things, virtualization is still a nascent technology. No doubt it has come into its own and is here to stay, but it makes sense that we may have jumped the gun in a few places along the way, and the standards and best practices we've put in place are not always what's ideal. Things move fast in this space, and it's worth taking a look ahead to make sure you don't get left behind.

The storage needing to be close to the kernel angle makes a lot of sense. It's closer to everything else, it's easier to manage policies and enact changes, and not to mention, you can get access to that data faster! What's not to like? The security implications also seem good (I suppose that ties back into policy). However, your access to all this seems to depend on what vendors you're locked in with, and what you have in place now. Is that true?