Virtualization Meets The Real World

Virtualization promises businesses a whole host of benefits, but managing the technology presents IT with some unique challenges. BMC's David Wagner discusses what those challenges are and how BMC's solutions can help address them.

May 2, 2006


Virtualization is nothing new. Companies have been using the technology for decades in mainframe and proprietary environments, but a confluence of cost factors and recent technology developments is compelling businesses to put it to work in other environments. David Wagner, BMC's director of solution marketing for capacity management and provisioning, recently spoke with Systems Management Pipeline Editor Amy Larsen DeCarlo about server virtualization in the enterprise today and some of the management challenges associated with it.

ALD: Can you set the stage and tell me some of the reasons why there is so much more development attention and customer interest in virtualization now?

DW: What I think is really going on here is that the technology enablement is happening at the same time as companies are under incredible business and IT cost pressures. You put all of this together and there is a perfect storm forcing people to really look at doing IT in a different way. Proprietary Unixes like AIX, Solaris, and HP-UX implemented virtualization three or four years ago, and it was adopted very slowly and gradually for high-end architectures: very expensive hardware typically used for large-scale database applications. Virtualization in the distributed world got its start there, but what really made it explode were two major phenomena.

One was VMware being first out of the chute to create, and I don't mean this in a pejorative sense, a down-market version of virtualization that made the technology accessible from a cost standpoint. What we saw in the last two or three years was people playing with virtualization on industry-standard servers, primarily in their development organizations. Development groups can run more applications on the same set of hardware at the same time, which of course is a good thing from a cost standpoint.

In the last two years especially, Intel and AMD in the x86 market continued the acceleration of ever-more compute power on the chip. They moved to dual-core architectures, and now they are putting hooks for virtualization technology and hypervisors directly in the hardware itself.

The other major driver was that these blade and rack-mount servers, with their ever-increasing power, are consuming more and more electricity, floor space, and cooling. All of these ancillary costs are turning out to be substantially higher than the cost of the server itself.

All of these things came together in the last year or so. Frankly, virtualization technology in the minds of most operations directors and application developers is proven enough that it is now safe and cool to deploy. I think we have now crossed over that technology chasm, and that is why everyone is talking about it and everyone is doing it.

ALD: For all of the advantages of virtualization, it is not necessarily the simplest thing to manage. Are IT staffs aware of the challenges they are going to face with respect to management and can you outline what those challenges are?

DW: Sure. As with anything, I think there is going to be a wide spectrum here. There are IT organizations that are very aware of the management challenges, and there are those at the other end of the spectrum that think of virtualization as just another platform to manage. I think the ones that think of it that way are doing themselves a disservice, because there are some unique risks associated with virtualized environments that don't exist in the physical environment, or are at least not as significant.

I would classify these risks into two main categories. One is all the risks associated with change. The reason change carries risk is that when you make changes, you need to know what the current state of something is, so that if and when problems do occur you can revert to the point before the change was made, or you can inform the right people so they can use tools to diagnose the problem based on knowledge of what the current configuration is. The unique thing about virtualized environments is that the environment's configuration itself is changing over time. So in virtual environments, you have applications that might be running on one physical machine one day and another the next, or in one VM one day and another the next, and VMs themselves are brought in and out of service.

This is a whole new paradigm and it creates a whole new set of availability risks and downstream management challenges.
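The baseline-and-revert idea behind that change tracking can be illustrated in a few lines. This is a minimal sketch of the concept only, not BMC's product; the configuration fields and function names are hypothetical:

```python
import copy

def apply_change(config, change, history):
    """Snapshot the current state before applying a change, so the
    prior configuration is always known if a problem appears later."""
    history.append(copy.deepcopy(config))
    config.update(change)

def revert(config, history):
    """Restore the state captured just before the most recent change."""
    config.clear()
    config.update(history.pop())

# Hypothetical example: an application's VM placement changes, then
# a problem is found and the change is rolled back.
cfg = {"app": "payroll", "host": "vm-01"}
hist = []
apply_change(cfg, {"host": "vm-07"}, hist)   # app migrates to another VM
revert(cfg, hist)                            # roll back to the known state
print(cfg["host"])  # back to "vm-01"
```

The point of the snapshot is exactly the one Wagner makes: you can only revert, or diagnose against a known-good configuration, if the state before each change was recorded.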

The other major bucket of virtualization challenges is one that simply does not exist in isolated physical environments: capacity risk. If you previously had two different applications running on two different physical servers, you could be pretty well certain they weren't going to cause problems for each other from a performance standpoint, because they each had their own resources. If one application required 30 percent of CPU at 9:30 in the morning to meet response-time guarantees, it could have it, because it had its own dedicated physical box. And if the other one needed 40 percent at 9:30 in the morning, that was fine. But if you combine them so they're both running on a shared hardware platform in two separate virtual machines, and they both need access to the same physical resource at the same time, by definition one of them is going to have to wait. That is a new risk that didn't exist previously.

So previously, the capacity risk of industry-standard architectures was really a cost issue. You just threw more hardware at it and knew the risk was solved. But throwing more hardware at it here doesn't solve the problem, because now you are making things share resources that didn't use to, so now you need to plan for that.
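The arithmetic behind that capacity risk can be sketched in a few lines. The 9:30 figures below come from Wagner's example; the midday numbers and function names are hypothetical additions for illustration:

```python
def combined_peaks(*workloads):
    """Sum per-interval peak CPU demand (as fractions of one host)
    across workloads slated to share a single physical server."""
    intervals = set().union(*workloads)
    return {t: sum(w.get(t, 0.0) for w in workloads) for t in intervals}

def over_capacity(peaks, capacity=1.0):
    """Intervals where combined demand exceeds the shared host's
    capacity, i.e. where at least one VM must wait."""
    return sorted(t for t, demand in peaks.items() if demand > capacity)

# 30% and 40% at 9:30 fit together (0.70 of one host), but the
# hypothetical midday peaks do not: 0.80 + 0.45 = 1.25 hosts' worth.
app_a = {"09:30": 0.30, "12:00": 0.80}
app_b = {"09:30": 0.40, "12:00": 0.45}
print(over_capacity(combined_peaks(app_a, app_b)))  # ["12:00"]
```

This is the planning problem Wagner describes: on dedicated boxes each peak was fine in isolation, so the contention only appears once you sum simultaneous demand on the shared host.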

ALD: What do you see as BMC's primary differentiators in the virtualization management space?

DW: There are a lot of solutions that can automate the technology. And there are a whole other set that can help with process management - discovering your environment and keeping the CMDB (Configuration Management Database) up to date. Where we see ourselves as being unique is we are the only management vendor that offers an integrated approach to automate both the technology and the process management sides of the equation.

ALD: Can you talk a little bit about what BMC's management capabilities are with respect to virtualization, and how you can help IT make sure they are getting the most out of their virtualized environment?

DW: Last fall we outlined a strategy by which we would manage an optimized virtualized environment. Since then, we've actually fleshed that out with some new capabilities.

The approach we are taking says that in order for customers to take advantage of virtualized environments in the most efficient way and with the least risk possible, they need to start with a baseline understanding of what their current environment looks like. The key point is that they need to understand what servers they have, what applications are running on those servers, and what business services those applications are supporting in terms of overall end-to-end business services outside of IT. They need to understand things like what users are associated with server environments. Finally, they need to know how busy those servers are.

Our solutions have the capability to discover the environment, including the physical environment, the business services, the applications, and the users, as well as to baseline the performance of the physical and then virtual server environment. We then populate that into a CMDB, which can then be used by any number of downstream management solutions that BMC brings to the table. At that point, what customers have is an understanding of how busy their environment is, and that can lead to some fairly dramatic return on investment very quickly. We've had customers literally find out they had under-utilized physical and virtual resources, so they didn't have to go out and buy new servers; they simply repurposed the ones they already had.

The next phase is the analysis and planning phase, and that is where customers use our solutions in the areas of asset management, performance analysis, and capacity planning to go in and really say: "Okay, if this is what we've got, and these are the business changes we are considering, what resources would we need to support those changes, and what would the best mix of resources actually be? If we were going to take a bunch of applications that were running in a physical environment and stack them in a virtual environment, what is the performance going to be? How much hardware will we need, and when will we need it?"

Our solutions will output a plan that says: these are the hardware resources you need, physical and virtual; this is how many you need; and this is when you will need them to support which services.

Then the third step builds on that first step of discovery. Discovery really begins a change-management lifecycle, because you've got to understand what you have so you can manage the changes. It is creating a change request. It is making sure that any changes are authorized, either by policy or manually, depending upon the organization's change model. Then we use our automated capabilities to automatically deploy the change.

So, for example, if the change is rolling out a new version of an application across multiple physical or virtual servers, we can completely automate that. We track it to make sure the change itself occurred as authorized and reconcile any differences. And the whole time we are making those changes, we reflect them back into the CMDB, that single source of truth, so that it is accurate and kept up to date over time.

Then the final capability we bring to the market is really for the most advanced customers, at least today, and that is the customers that actually want to dynamically optimize their virtual and physical configurations in real time. Rather than simply moving from physical to virtual and then managing it, they actually want to dynamically optimize the environment.

So we have solutions that allow our customers to create policies by which they can allocate physical and virtual resources to business services in real time, provision them in real time, or do things like designate one application as being more important than another. Let's say there is a spike in demand for one of those applications; we can actually take resources away from the less important application and bring them online for the more important one in real time.
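That kind of priority policy can be sketched in a few lines. This is an illustrative model only, not BMC's actual policy engine; the service names, priorities, and numbers are all hypothetical:

```python
def rebalance(services, priorities, demand, total=1.0):
    """Reallocate a shared host's capacity in priority order: each
    service gets its demanded share from whatever remains, so a spike
    in a high-priority service pulls resources from lower ones."""
    new_alloc = {}
    remaining = total
    for svc in sorted(services, key=lambda s: priorities[s], reverse=True):
        share = min(demand[svc], remaining)
        new_alloc[svc] = share
        remaining -= share
    return new_alloc

alloc = {"billing": 0.5, "reporting": 0.5}
prio = {"billing": 2, "reporting": 1}
# Demand spike: billing suddenly needs 70% of the shared host, so
# reporting is squeezed down to the 30% that is left.
print(rebalance(alloc, prio, demand={"billing": 0.7, "reporting": 0.5}))
```

A real implementation would of course act on hypervisor resource controls rather than a dictionary, but the policy decision, rank services and satisfy the more important ones first, is the same.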

ALD: How important is it for the business side to be involved in this?

DW: There is a process involved here. We have a series of tools in the discovery phase, because customers are trying to solve different portions of that. We have foundation tools that will discover just the physical environment. We have topology tools that will discover the inter-server relationships of all the work that is going on in support of a given application. What we are discovering is IT's view of a business service. Then obviously you would want to reconcile that view with the business side's view, because IT's view might not directly map to the business' view.

There is some manual reconciliation that should occur. We also have discovery capabilities that can go in and discover the actual configuration information that is running on all of those servers: the applications, the patch levels, the revision levels. All that stuff is very straightforward. And for discovering the users and the identity of the users, we have solutions for that as well.

ALD: What role do partnerships play in BMC's management solutions for virtualized environments?

DW: We've got a fairly broad set of partners already, and I don't want to pre-announce anything, but we are continuing to build out an ecosystem. These partners are not limited to just technology partners; they include application providers and solution providers as well.

ALD: Can you give me an idea of BMC's future direction in this space?

DW: We are going to step up the level of process and technology integration across our solutions portfolio. What you will see is a geometric increase in integrated workflows that span our technology and process solutions. So where today we might have 30 or 40 integrations, you would expect to see more. You can expect to see more and more BMC solutions bolted into workflows like the ones I described.
