Network Computing is part of the Informa Tech Division of Informa PLC


How to Stay Up and Running When the Cloud Goes Down – and Avoid Lock-ins

[Image: cloud outages. Credit: Elen / Alamy Stock Photo]

The growth and scale of cloud computing since its early days after the turn of the millennium have been utterly astonishing, and we undeniably now inhabit a world powered by the cloud. It’s easy to understand how this happened at businesses big and small, because the benefits speak for themselves: lower infrastructure costs and greater flexibility, availability, collaboration, and recoverability. Yet the sales brochures rarely mention what can happen when things go wrong, and the concentration of public cloud capacity in a dwindling number of hyperscale players can pose a problem.

In the post-pandemic era, where every facet of our lives is more digital than ever, downtime is unacceptable – organizations that want to remain competitive must be accessible 24/7. According to the Uptime Institute, there has been some improvement in managing outages and downtime. Yet they remain, unfortunately, a fact of life, and when incidents occur in the real world, they impact the virtual one too. As the world becomes increasingly digital, these outages are more costly than ever: if businesses can’t access their infrastructure or data in the cloud, they can take a sizable hit to the bottom line.

These risks are significantly worse if a company relies on just one vendor. According to a recent Google Cloud forecast, only 26% of organizations reported using multiple clouds in 2022 – and while this is up five percentage points from 21% the previous year, the vast majority of businesses still are not. Hybrid cloud adoption also increased from 25% to 42.5% – an impressive jump, but there is still a way to go before it reaches most users.

Relying on a single provider anywhere in the stack is a risk, so it’s not just the cloud that’s the issue. To achieve genuine redundancy and more robust business continuity, organizations must focus on creating a ‘Multi-X’ strategy: multi-cloud, multi-location, and multi-provider.

Achieving redundancy and more with multi-cloud

The cloud is indispensable across industries for the speed and performance it delivers and for building modern, agile, scalable, and flexible environments. However, by storing data in just one cloud, organizations risk ‘cloud concentration’ – and this dependency can lead to a single point of failure, with profound knock-on effects like losing access to business-critical systems.
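A rough back-of-the-envelope sketch shows why that single point of failure matters. Assuming providers fail independently – an assumption shared dependencies such as DNS or routing can violate in practice – the chance of every cloud being down at once shrinks dramatically with a second, independent provider:

```python
# Illustrative only: combined downtime of independent providers.
# Assumes failures are statistically independent, which shared
# real-world dependencies (DNS, BGP, power grids) can violate.

def combined_downtime(availabilities):
    """Probability that every listed provider is down at the same time."""
    p = 1.0
    for a in availabilities:
        p *= (1.0 - a)
    return p

single = combined_downtime([0.999])        # one provider at 99.9% uptime
dual = combined_downtime([0.999, 0.999])   # two independent providers

print(f"Single provider downtime: {single:.6f}")  # ~8.8 hours/year
print(f"Dual provider downtime:   {dual:.6f}")    # ~32 seconds/year
```

The numbers are hypothetical service levels, not any vendor’s published SLA, but the shape of the result holds: independence, not raw uptime, is what multi-cloud buys you.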

Multi-cloud is the logical response to this dependency. Some sectors, such as financial services, have made using multiple providers mandatory to safeguard business continuity and economic stability. But beyond better business continuity, every cloud has its own advantages: being able to pick between clouds for individual use cases means organizations can ensure they have best-in-class capabilities for their specific requirements.

These benefits can be further strengthened with interconnection services such as data center- and carrier-neutral cloud exchanges, where direct cloud connectivity is supported by a geographically distributed infrastructure that promotes the greatest level of resiliency. Connecting to these via APIs simplifies the process, and some providers offer portals for booking and scaling the connectivity and synchronizing data between clouds, minimizing risk even more.
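As a sketch of what API-driven booking can look like, the snippet below assembles an order for a virtual circuit between two clouds. The endpoint names, field names, and service identifiers are entirely hypothetical – they are not any specific exchange’s actual API – but the pattern of declaring endpoints, bandwidth, and redundancy in a single payload is typical:

```python
import json

# Hypothetical payload builder: all field names and values below are
# illustrative assumptions, not a real cloud exchange's API schema.

def build_vc_order(cloud_a, cloud_b, bandwidth_mbps, metro):
    """Assemble an order payload for a virtual circuit between two clouds."""
    return {
        "service": "virtual-circuit",
        "endpoints": [
            {"provider": cloud_a, "metro": metro},
            {"provider": cloud_b, "metro": metro},
        ],
        "bandwidth_mbps": bandwidth_mbps,
        "redundancy": "dual-path",  # request geographically diverse routing
    }

order = build_vc_order("aws", "azure", 1000, "FRA")
print(json.dumps(order, indent=2))
```

In a real deployment, this payload would be submitted to the exchange’s ordering endpoint; scaling bandwidth up or down later is then a matter of updating one field rather than re-cabling anything.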

Achieving resilience with ‘geo-redundant’ connectivity

In the same way that using a single cloud provider can leave your data vulnerable to connectivity outages, operating from just one physical location can also result in a single point of failure. To mitigate the risk of localized disruptions, such as a flood or a power blackout, it’s critical to spread data centers across geographies.

However, it’s also essential to ensure that there are redundant network pathways between company headquarters, branches, manufacturing plants, and data centers – pathways that follow distinct routes and do not overlap. If there is only a single circuit between a company location and a data center, it is just as vulnerable to network disruption as relying on a single site or provider – for instance, roadworks could accidentally cut the fiber cabling and knock the route offline.
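Checking that two routes truly do not overlap can be reduced to a simple set comparison once each route is described as a list of physical segments. The segment names below are made up for illustration; a real diversity audit needs the carriers’ actual route data, since two circuits sold as “diverse” can still share a duct:

```python
# Sketch: verify that two network routes share no physical segment.
# Segment names are illustrative; real checks require carrier route
# maps, as nominally diverse circuits may still ride the same conduit.

def shared_segments(path_a, path_b):
    """Return the set of physical segments two routes have in common."""
    return set(path_a) & set(path_b)

primary = ["HQ-ductA", "ductA-exchange1", "exchange1-DC"]
backup  = ["HQ-ductB", "ductB-exchange2", "exchange2-DC"]
risky   = ["HQ-ductA", "ductA-exchange2", "exchange2-DC"]

print(shared_segments(primary, backup))  # empty set: truly diverse
print(shared_segments(primary, risky))   # shared duct: a hidden SPOF
```

An empty intersection means a single cut cannot take down both routes; any shared segment is exactly the single point of failure the paragraph above warns about.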

Reducing risk with multiple providers

The ability to pick between providers isn’t only useful for cloud provisioning; it’s desirable across the whole infrastructure stack. By spreading risk between multiple network operators, carriers, and data centers, customers can boost resilience and rid themselves of another potential single point of failure.

Additionally, no single provider can hold your data hostage – one of the significant perils of vendor lock-in – giving the organization the freedom to change providers as and when it is strategically sound. By taking this vendor-, carrier-, and data center-neutral approach to infrastructure through a multi-provider strategy, it’s possible to ensure uninterrupted access to critical data, whatever the incident, and gain much greater flexibility.

In summary, a ‘Multi-X’ strategy designed to offer choices in cloud usage and provisioning, data centers and storage, location, and network routing gives businesses the flexibility that opting for one vendor will rarely, if ever, provide. Even as outages become rarer and, according to Gartner data, less severe than they used to be, it’s never wise to put all your eggs in one basket.

Dr. Thomas King is Chief Technology Officer (CTO) at DE-CIX.
