Calling It: NUMA Will Be The Shizzle In Two Years

But let's look at the potential and what it means for server provisioning and dynamic resource allocation. Wouldn't it be great to scale up existing servers on demand, adding CPUs, RAM, and I/O without having to take the server down? What would that do to your ability to respond to peaks and valleys? Gone would be the days when you add processing power by provisioning a bigger server or adding a new server to an existing pool. If your company interacts with customers and you run a sale or promotion, you know you will have a temporary need for more processing to handle the load. Sure, you could use a cloud service, but that has its own issues. If you want to run an analysis over a large data set, do you want to wait until you are old and gray, or would you rather grab more RAM or I/O? Wouldn't it be useful to prioritize computing power for your most pressing business need? I'm not suggesting that hardware allocations are going to change minute by minute (they don't today), but your processing needs change, sometimes predictably and sometimes not.
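
None of this is pure science fiction on the software side, either. Linux already exposes a CPU hotplug interface through sysfs, so logical processors can be taken out of service and returned to a running machine without a reboot. Here's a minimal Python sketch of that interface; it assumes a kernel built with CONFIG_HOTPLUG_CPU and root privileges, and the choice of cpu1 for the demo is arbitrary:

```python
"""Minimal sketch: hot-remove and hot-add a logical CPU on a running
Linux box via sysfs. Assumes CONFIG_HOTPLUG_CPU and root privileges;
cpu1 is an arbitrary choice for the demo."""
import glob
import os

SYS_CPU = "/sys/devices/system/cpu"

def online_cpus():
    """Return the sorted list of CPU ids currently online."""
    cpus = []
    for path in glob.glob(f"{SYS_CPU}/cpu[0-9]*"):
        online = os.path.join(path, "online")
        # cpu0 usually has no 'online' file; it is always online.
        if not os.path.exists(online) or open(online).read().strip() == "1":
            cpus.append(int(os.path.basename(path)[3:]))
    return sorted(cpus)

def set_cpu_online(cpu_id, online):
    """Write 1/0 to the CPU's sysfs 'online' file to add/remove it."""
    with open(f"{SYS_CPU}/cpu{cpu_id}/online", "w") as f:
        f.write("1" if online else "0")

print("online before:", online_cpus())
set_cpu_online(1, False)  # take cpu1 out of service...
print("online during:", online_cpus())
set_cpu_online(1, True)   # ...and bring it back, no reboot needed
print("online after: ", online_cpus())
```

Memory has a similar sysfs hotplug interface on supported kernels; the point is that the operating-system plumbing for dynamic scale-up is further along than most people realize.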

What about the impact on availability? Anyone who has recovered from a server failure in minutes rather than hours by spinning up a VM understands the power of hypervisor virtualization. But take that one step further, to hardware virtualization. If you can add and remove hardware dynamically on running systems, it's not a reach to imagine distributing an entire server (OS, application, data, the whole shebang), or even a hypervisor, across many server blades. Lose one blade and you just lose some of the processing resources; the server keeps running. Recovery is as quick as adding more hardware. Do that, and you might be chuckling at peers who are proud of their 99.999 percent availability (about five minutes of downtime per year) while you boast 100 percent. You know downtime is never just five minutes, and it never happens when no one is looking.
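
For the curious, the arithmetic behind those nines is quick to check with a throwaway snippet:

```python
# Expected downtime per year implied by an availability percentage.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for pct in (99.9, 99.99, 99.999):
    downtime = MINUTES_PER_YEAR * (1 - pct / 100)
    print(f"{pct}% availability -> about {downtime:.1f} minutes of downtime per year")
```

Five nines works out to roughly 5.3 minutes a year; each nine you add divides the allowance by ten.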

I think it will take a few years to get these questions answered. I expect that hypervisor vendors like VMware, Microsoft, Citrix, and the Linux KVM developers will be working on this. I know there is work within the Linux community on NUMA, but I don't know its extent. Hardware vendors have NUMA systems running; HP's recently announced Superdome 2 is the latest case. Considering that the new Integrity blades can run in both a Superdome 2 chassis and an HP c-Class chassis, I'd imagine HP will explore, if it hasn't already, the efficacy of putting NUMA into the c-Class systems. I don't have any inside knowledge of HP's plans, but it's what I'd do. Regardless, Superdome is already in the market, as are 3Leaf, IBM, and others.
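
As a point of reference on the Linux side, NUMA topology is already visible to software through sysfs, which gives a sense of what the kernel developers have to work with. A short sketch, assuming a NUMA-aware kernel (on a non-NUMA box you will typically see only node0):

```python
import glob
import os

# Each NUMA node appears as /sys/devices/system/node/nodeN, with its
# CPUs listed in 'cpulist' and per-node memory totals in 'meminfo'.
for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    with open(os.path.join(node, "meminfo")) as f:
        # First line reads "Node N MemTotal: <size> kB".
        mem_kb = f.readline().split()[-2]
    print(f"{os.path.basename(node)}: cpus {cpus}, {mem_kb} kB memory")
```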

I think it's got legs, and we are just starting to see it emerge from research and HPC silos. Besides, it's fun to say. NUMA. Go ahead, try it.