Moving Mission-Critical Applications To The Cloud

Here are three steps enterprises and private cloud providers need to take to meet the availability requirements of critical business applications.

Jason Andersen

January 15, 2015


Businesses are increasingly looking for the rapid provisioning, elasticity, self-service capabilities, and on-demand advantages of cloud applications and services. But the most recent wave of virtualization has left mission-critical apps relatively untouched, with most of these apps still hosted on bare-metal systems deep in the data center. How can businesses bridge the gap between these two worlds?

The reality is that moving mission-critical apps to the cloud is not trivial. The complexities of meeting mission-critical demands for security and availability in a cloud environment often involve technologies that are still maturing. That's why most enterprise cloud strategies today are limited to noncritical apps.

But this is beginning to change. As the cloud matures, new approaches are emerging that have the potential to meet the demands of mission-critical apps. It's only a matter of time before private clouds extend their reach to virtually every business application; the potential advantages in cost, efficiency, and agility are simply too great to ignore.

So what do private clouds have to become to meet the demands of mission-critical apps in shared or virtualized environments? How can enterprises and private cloud service providers get from "here" to "there" -- and get there before their competition?

First off, they need to take an application-centric rather than an infrastructure-centric approach to cloud services. That means building cloud environments that go beyond the basics of "commodity services" to deliver robust capabilities that meet the needs of mission-critical apps. I believe there are three key steps for making this leap successfully.

1. Take a new approach to availability
For mission-critical apps, high availability is non-negotiable. Indeed, spotty availability has been a key barrier to enterprises moving more apps to the cloud. Recent, well-publicized outages at Microsoft's Azure cloud service, among others, have underscored the challenges of making public and private clouds highly available. But traditional hardware-based approaches to fault tolerance don't map well to the fluid, elastic nature of cloud environments.

Here is where the concept of software-defined availability can help. With this approach, failure prevention and recovery are abstracted from the application, enabling mainframe-like availability levels using commodity hardware in a dynamic cloud environment. Best of all, this can be achieved without re-engineering the application, reducing the time, expense, and risk associated with moving mission-critical apps to the cloud. Availability can be deployed as a cloud service, dramatically simplifying the task of meeting mission-critical requirements, and evolving as requirements change.
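To make the idea concrete, here is a minimal sketch of the pattern in Python, assuming nothing about any particular vendor's product: an external watchdog enforces an availability policy by probing the application as a black box, so the application itself needs no re-engineering. The policy fields and the probe, restart, and failover hooks are all illustrative names, not a real API.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class AvailabilityPolicy:
    """Desired availability expressed as policy, not application code."""
    check_interval_s: float = 5.0  # how often to probe the workload
    max_failures: int = 3          # consecutive failures before failover

def supervise(is_healthy: Callable[[], bool],
              restart_here: Callable[[], None],
              fail_over: Callable[[], None],
              policy: AvailabilityPolicy) -> None:
    """Enforce the policy from outside the application.

    The app is treated as a black box: we only probe it, restart it
    on the same host, or fail it over to another host. The application
    itself is never modified.
    """
    failures = 0
    while True:
        if is_healthy():
            failures = 0
        else:
            failures += 1
            if failures < policy.max_failures:
                restart_here()   # try cheap local recovery first
            else:
                fail_over()      # escalate to another host
                failures = 0
        time.sleep(policy.check_interval_s)
```

Because the policy is data rather than code, it can be tightened or relaxed as requirements change, which is what makes availability deployable as a service.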

2. Develop strong orchestration capabilities
Another key success factor for mission-critical apps is having the ability to orchestrate cloud resources. This is essential to ensure every bit of data moving around in the cloud ends up exactly where it's supposed to be, at exactly the right time.

Let's face it: Not all mission-critical applications have the same requirements. They may only be mission-critical at certain times of the day, week, month, or year. Consider a financial services application at quarter end or a payment processing application during the holiday shopping season. Enterprises must be able to scale up and scale down application availability requirements depending on their need at any point in time.

This means shifting from a mindset of "application customization" to "application configuration," with that configuration occurring dynamically. Again, this approach eliminates or minimizes the need to rewrite applications.
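As a hedged illustration of configuration over customization, the sketch below derives an application's required availability tier from the calendar, echoing the quarter-end and holiday-season examples above. The tier names and date windows are assumptions for illustration only, not any orchestrator's actual vocabulary.

```python
from datetime import date

# Illustrative tier names only.
STANDARD = "standard"              # ordinary HA: restart on failure
FAULT_TOLERANT = "fault_tolerant"  # fully redundant execution

def required_tier(today: date) -> str:
    """Derive criticality from configuration (the calendar), not code."""
    # Treat the last week of each quarter as mission-critical,
    # e.g., for a financial-close application.
    if today.month in (3, 6, 9, 12) and today.day >= 24:
        return FAULT_TOLERANT
    # Holiday shopping season for a payment-processing application.
    if today.month == 12 or (today.month == 11 and today.day >= 20):
        return FAULT_TOLERANT
    return STANDARD
```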

Consider fault tolerance. Maintaining fault tolerance in a traditional on-premises data center usually means deploying the application on fully redundant hardware around the clock, whether or not that redundancy is needed at the moment. That gets costly fast. With dynamic orchestration in the cloud, it's possible to deploy an application with fault tolerance only when that level of maximum availability is needed.

Then, when fault tolerance isn't required, the application can be throttled back to a non-FT infrastructure seamlessly with no interruption of service. This saves money, optimizes utilization of computing resources, and provides the availability required when it's most critical.
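Continuing the sketch above, a small reconciliation step could act on that tier, moving the workload between a fault-tolerant pool and commodity infrastructure only when the desired tier changes. The orchestrator object and its live_migrate() call are hypothetical stand-ins for whatever control plane is in use; the no-interruption property comes from live migration, which KVM-based clouds do support.

```python
def reconcile(app_id: str, current_tier: str, desired_tier: str,
              orchestrator) -> str:
    """Move a running workload between pools only on a tier change.

    `orchestrator.live_migrate()` is a hypothetical call standing in
    for the cloud's control plane; because the move is a live
    migration, the application keeps serving requests throughout.
    """
    if desired_tier == current_tier:
        return current_tier  # nothing to do; avoid needless churn
    pool = "ft-pool" if desired_tier == FAULT_TOLERANT else "commodity-pool"
    orchestrator.live_migrate(app_id, target_pool=pool)
    return desired_tier
```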

3. Build on open source technologies
Reducing costs is typically one of the most compelling reasons to move mission-critical apps to the cloud. Using proprietary technologies with high license costs, support fees, or other ongoing expenses undermines that advantage.

That's why it's critical to leverage open technologies and architectures that don't take a percentage off the top. Technologies like OpenStack, Linux, and KVM (Kernel-based Virtual Machine) reduce the "technology tax" on enterprises, while providing the flexibility to build cloud environments using innovative, best-of-breed solutions for software-defined availability and dynamic orchestration to meet the needs of mission-critical apps. The ecosystem of vendors able to solve those challenges is now well established and growing every day.
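As one concrete illustration of building on this stack, the openstacksdk Python library can boot a KVM-backed instance on an OpenStack cloud in a few lines. This is a minimal sketch; the cloud, image, flavor, and network names below are placeholders that vary by deployment.

```python
import openstack

# Credentials come from a standard clouds.yaml; "mycloud" is a placeholder.
conn = openstack.connect(cloud="mycloud")

# Resolve deployment-specific resources by name.
image = conn.compute.find_image("my-linux-image")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Boot a server (KVM-backed on a KVM cloud) and wait until it is ACTIVE.
server = conn.compute.create_server(
    name="mission-critical-app",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```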

Closing the gap
Enterprises and cloud service providers that embrace these strategies are already a step ahead in the journey toward achieving the benefits of running mission-critical applications in the cloud.

Success in this new world depends on employing new cloud technologies that meet the demanding requirements of their most critical business applications, while effectively managing risk and reducing cost and complexity. The latest cloud solutions and open source technologies are closing this gap and making it possible to take that leap.

About the Author

Jason Andersen

Vice President, Business Line Management, Stratus Technologies

Jason Andersen is Vice President of Business Line Management and is responsible for setting the product road maps and go-to-market strategies for Stratus products and services. Jason has a deep understanding of both on-premises and cloud-based infrastructure for the Industrial Internet of Things (IIoT) and has been responsible for the successful market delivery of products and services for almost 20 years. Prior to joining Stratus in 2013, Jason was director of product line management at Red Hat, where he was responsible for go-to-market strategy, product introductions and launches, and product marketing for the JBoss application products. Jason also previously held product management positions at Red Hat and IBM Software Group.
