How to Manage Complex Infrastructures – with Thousands of Workloads – for Optimal Results

Enterprises struggle to optimize their data center environments. Why? Because it’s often – some would say always – a matter of balancing the need for high performance, high availability, scalability, and security against the cost of supporting mission-critical applications. Trade-offs are inevitable.

But today, this balancing act is more complex than ever, as it involves considering the cloud in all its forms: private, public, and hybrid. A survey conducted by IDG Research found that fewer than 10 percent of the executives surveyed believe their data centers are optimized to meet their requirements.

The cloud era begins

In the early days of cloud adoption, enterprises would enthusiastically embrace a “cloud-first” strategy and even migrate some mission-critical applications to the cloud. Doing so was often based on a perceived immediate need:

  • Hosting specific apps and workloads on-premises was becoming too problematic – or would require increasing headcount, which would take a toll on operating expenses. The cloud was perceived as a less expensive option.
  • The cloud appeared to offer practically unlimited scalability, so migrating was expected to eliminate performance issues.
  • The ROI of migrating had become too difficult to calculate. Let’s face it: an ROI calculator is valuable when dealing with a limited number of variables (see the quick sketch after this list), but in more complex operating environments, ROI can only be measured after the fact – such as at the end of the quarter, when “actual results” become available.

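To see why ROI math stays manageable only while the variables stay few, here is a minimal, purely illustrative Python sketch; the figures, the three-year horizon, and the simple_roi helper are hypothetical and not drawn from any vendor's calculator.

  # Classic ROI with a handful of known inputs (all figures hypothetical).
  def simple_roi(gain, cost):
      # Net benefit divided by the cost invested.
      return (gain - cost) / cost

  migration_cost = 250_000        # one-time migration spend
  annual_onprem_spend = 400_000   # current hosting cost
  annual_cloud_spend = 300_000    # projected cloud cost
  three_year_savings = (annual_onprem_spend - annual_cloud_spend) * 3

  print(f"3-year ROI: {simple_roi(three_year_savings, migration_cost):.0%}")
  # Prints "3-year ROI: 20%". With thousands of interdependent workloads,
  # these inputs are no longer knowable up front, which is the point above.
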
At the same time, the cloud was young, and many enterprises were not about to trust mission-critical workloads to cloud providers, for fear that cloud service models were not yet bulletproof. And there were other concerns with the cloud, including the belief that multitenancy could jeopardize the security of a cloud customer’s data.

But as cloud models were tested and proven in the early days of AWS, Azure, and Google Cloud, enterprises could take a more structured approach to the “on-premises vs. cloud” workload placement decision – and in many cases, taking that approach has almost become an enterprise mandate. However, those decisions haven’t always worked out. Research by Enterprise Strategy Group reveals that 57 percent of enterprises have moved at least one workload from a cloud or SaaS provider back to on-premises infrastructure.

Workload placement simulation

Just as new hardware and software are often tested in pre-production environments to work out the bugs, enterprises can now use workload analysis and simulation to test their assumptions about workload placement before deploying a workload on-premises or in the cloud. Doing so can avoid costly errors and time-consuming re-architecture of the infrastructure. But this wasn’t always the case. Although simulation had long been used to model applications of all kinds across many industries, simulating infrastructure behavior to determine how to deliver optimal performance, availability, scalability, and cost was still rudimentary even five years ago.

Today, through workload analysis of production application I/O profiles and simulation software, enterprises can create a workload model and test how an application will run in the cloud or on-premises, considering all those factors. They can “try out” an existing workload in a new environment without having to make the real-world move to that environment.
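
As a rough illustration of what “trying out” a workload means in practice, the Python sketch below replays a captured I/O profile against two candidate environments. The profile fields, environment parameters, and pass/fail logic are simplified assumptions made for illustration; commercial workload-simulation tools work from far richer production I/O data.

  from dataclasses import dataclass

  @dataclass
  class WorkloadProfile:          # captured from production monitoring
      name: str
      peak_iops: int
      latency_target_ms: float    # latency the application currently tolerates
      capacity_gb: int

  @dataclass
  class Environment:              # an on-premises array or a cloud storage tier
      name: str
      max_iops: int
      latency_ms: float
      cost_per_gb_month: float

  def simulate(workload: WorkloadProfile, env: Environment) -> dict:
      """Check fit and estimate monthly cost before any real migration."""
      fits = (env.max_iops >= workload.peak_iops
              and env.latency_ms <= workload.latency_target_ms)
      return {
          "environment": env.name,
          "meets_performance": fits,
          "monthly_cost": round(workload.capacity_gb * env.cost_per_gb_month, 2),
      }

  erp = WorkloadProfile("erp-db", peak_iops=45_000, latency_target_ms=2.0, capacity_gb=4_000)
  candidates = [
      Environment("on-prem-nvme", max_iops=200_000, latency_ms=0.5, cost_per_gb_month=0.18),
      Environment("cloud-block-general", max_iops=16_000, latency_ms=1.0, cost_per_gb_month=0.10),
  ]
  for env in candidates:
      print(simulate(erp, env))

In this toy example, the cheaper cloud tier fails the IOPS check, which is exactly the kind of finding an enterprise would rather surface in simulation than after a migration.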

The process inserts two new steps – workload analysis and workload simulation – which spare organizations the incalculable costs and lost time of an “educated guess” that turns out to be nearly disastrous. Workload placement simulation can also become the driving factor and justification for reverse cloud migration – that is, migration from the cloud back to on-premises data centers.

While simulating applications for workload placement is often done with a view toward optimizing the balance of performance, availability, scalability, and cost, other factors may influence an enterprise's decision – for example, vendor lock-in for workloads already running in the cloud, or immature multi-cloud management platforms.

Methodology

Here’s a method for leveraging workload analysis and simulation to test for the results that matter most to your organization – in this case, determining an organization’s cloud migration readiness (CMR). The process is straightforward, and the simulation tool takes into account the hundreds or thousands of workloads on the infrastructure, along with the varied requirements of each.
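
As one way to picture that roll-up, the Python sketch below combines per-workload simulation scores for performance, availability, scalability, and cost into a single cloud migration readiness figure. The 0-to-1 scoring scale, the weights, and the 0.7 threshold are assumptions made for illustration only; the methodology and tooling you adopt will define their own scoring.

  # Hypothetical per-workload scores (0.0-1.0) produced by the simulation step,
  # shown for two of what would realistically be thousands of workloads.
  results = {
      "erp-db":    {"performance": 0.4, "availability": 0.9, "scalability": 0.8, "cost": 0.3},
      "web-front": {"performance": 0.9, "availability": 0.9, "scalability": 1.0, "cost": 0.8},
  }
  weights = {"performance": 0.35, "availability": 0.25, "scalability": 0.20, "cost": 0.20}

  def cmr_score(scores: dict) -> float:
      # Weighted readiness score for a single workload.
      return sum(scores[factor] * weight for factor, weight in weights.items())

  for name, scores in results.items():
      score = cmr_score(scores)
      placement = "cloud-ready" if score >= 0.7 else "keep on-premises, re-test later"
      print(f"{name}: CMR {score:.2f} -> {placement}")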

The simulation engine is similar to those used across a range of industries in which there are simply too many variables for humans to correlate and resolve in order to make the right decisions about implementing change. With workload analysis and simulation working on their behalf, infrastructure planners and administrators can more easily face the truth that IT infrastructure will never become less complex. We just need to keep ahead of it.