Deciding whether public cloud, hybrid, or private makes the most sense for an organization isn't just a cost decision. For some, moving to a private cloud is a more appealing – and feasible – option.
To put the issue into perspective, a report by 451 Research found that 54% of businesses surveyed had moved all or part of their workloads back from the cloud to local infrastructure.
Research from IDC found that planned and unplanned egress charges account for an average of 6% of an organization's cloud storage costs. That figure may sound small, but it can be enough to make cloud migration less financially viable.
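To make the math concrete, here is a rough back-of-the-envelope sketch of how egress charges can inflate a monthly storage bill. The per-GB rates and data volumes below are illustrative assumptions, not actual provider pricing or IDC figures:

```python
# Rough, illustrative estimate of how egress charges add to a cloud
# storage bill. All rates and volumes are hypothetical examples, not
# real provider pricing or IDC figures.

def monthly_storage_bill(stored_gb, egress_gb,
                         storage_rate=0.023,  # $/GB-month (assumed)
                         egress_rate=0.09):   # $/GB transferred out (assumed)
    storage_cost = stored_gb * storage_rate
    egress_cost = egress_gb * egress_rate
    return storage_cost, egress_cost

# A workload storing 500 TB and pulling back 10 TB per month:
storage, egress = monthly_storage_bill(stored_gb=500_000, egress_gb=10_000)
total = storage + egress
print(f"storage: ${storage:,.2f}, egress: ${egress:,.2f} "
      f"({egress / total:.1%} of total)")
```

With these assumed numbers, egress lands at roughly 7% of the total bill, in the same ballpark as the 6% average IDC reports; heavier retrieval patterns push that share up quickly.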
Painful cloud bills tend to worsen with older applications and bloated data, so rewriting applications and trimming data is key. This strategy entails modifying existing applications, or at the very least a significant portion of the codebase, to make better use of cloud-native features and the added flexibility they provide.
Compounding the impact of cloud costs
How does that impact costs? If an application consumes heavy resources because it processes big data, renders images, or generates a huge number of metadata operations, it can drive up cloud billing. Refactoring – modifying existing applications to optimize them for the cloud, as opposed to taking a "lift and shift" approach – requires the most time and resources upfront, but it also has the greatest potential to lower cloud storage costs. Refactoring balances resource use with demand and eliminates wasteful data bloat, yielding a better and longer-lasting ROI than applications that aren't cloud-native.
Organizations must figure out what will be the most cost-effective in the long term while also considering their specific use cases and timeline.
Security limitations and performance concerns
While security concerns were the second-most-common reason (23%) for repatriation cited in 451 Research’s Voice of the Enterprise: Datacenters 2021 survey, this is changing. Initial reluctance to adopt the cloud due to security concerns has prompted cloud vendors to improve security, making this less of a stumbling block.
Performance issues with certain workloads can lead companies to pull them off the cloud. Low overall availability, high network latency, and slow application processing are frequently cited as causes. Other contributing factors include data lock-in, data transfer bottlenecks, and the bugs that can creep into large-scale distributed systems. Some workloads simply aren't viable in the cloud because of these issues.
Cloud diversification: three examples
IT leaders are sending a strong message: they’re thoughtfully evaluating the best cloud infrastructure for a given application. Here are three use cases where we’ve seen this playing out.
1) Streaming content: On-prem storage-as-a-service
Broadcast services provider MediaHub Australia needed to enable customers to efficiently store and immediately retrieve content. To support its new storage-as-a-service offering, the company deployed a scale-out hybrid cloud storage solution and flexible, high-performance servers. Customers can now access their constantly growing archives instantly with reliable, flexible storage – without paying ingress or egress charges, early deletion or embargo penalties, or costs related to storage regions.
2) Edge solutions: Building cloud-like infrastructures at the edge
For many organizations, enterprise data is no longer confined to the data center; it's being generated at the edge and then processed and stored in the cloud. All this data must flow seamlessly among edge sites, clouds, and data centers.
Consider a mining company conducting operations in remote areas without reliable communications. A massive amount of data is being collected, such as how often big earth movers lift material and how much weight they're lifting. This kind of data is key for things like determining how long these machines can go between oil changes. In situations like this, it may not make sense to keep the data in the cloud because connectivity isn't reliable.
3) Scale-out applications: Hybrid cloud
A central Asian digital startup sought to improve how citizens access government programs online. It needed a modern, scalable hybrid-cloud infrastructure to support rapid growth and expansion. The solution, a scale-out application using enterprise infrastructure and software-defined object storage, has allowed the company to expand its digital services more quickly. This hybrid cloud arrangement retains the necessary on-premises connection that enables the company to perform and deliver at a manageable cost.
Seeking the “best execution venue”
Repatriation isn't an all-or-nothing proposition. 451 Research calls it the "best execution venue": the idea of putting each workload in the most appropriate location.
More businesses today are adopting a hybrid or multi-cloud strategy to leverage the benefits of different deployment models for different parts of their applications. It makes sense to repatriate workloads and data storage that tend to perform the same tasks repeatedly, like long-term data storage without any specialized data processing.
By contrast, line-of-business applications tend to do well in the cloud because they don’t require high performance. It’s also probably not a good idea to repatriate workloads that depend on specialized cloud-based services. Public clouds are often more cost-effective when advanced IT services are involved, such as AI, quantum computing, deep analytics, and huge scalability needs.
When it comes to repatriation, important considerations beyond performance include where applications are running, what level of security is needed, how much it will cost to pull back the data, and how long it will take if such a pullback is done. Considering all needs will help to chart a course that’s right for the enterprise.
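The screening logic above can be sketched as a toy decision rule. The criteria mirror the ones discussed in this section, but the specific fields, thresholds, and venue labels below are invented for illustration, not a prescribed framework:

```python
# Toy "best execution venue" screen. The fields, threshold, and venue
# labels are illustrative assumptions, not an actual decision framework.

def suggest_venue(workload):
    # Workloads tied to specialized cloud services (AI, deep analytics,
    # huge scalability) rarely repatriate well.
    if workload["uses_specialized_cloud_services"]:
        return "public cloud"
    # Repetitive, storage-heavy workloads with steady retrieval often
    # cost less on local infrastructure once egress is factored in.
    if workload["repetitive"] and workload["egress_gb_per_month"] > 5_000:
        return "on-premises"
    # Everything else: evaluate case by case across both.
    return "hybrid"

archive = {"uses_specialized_cloud_services": False,
           "repetitive": True,
           "egress_gb_per_month": 20_000}
print(suggest_venue(archive))  # on-premises
```

In practice the decision weighs far more inputs (security posture, pullback cost and duration, where applications run today), but the shape of the reasoning is the same: score each workload individually rather than moving everything at once.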
Chris Harvey is lead architect and director of solutions engineering, EMEA/APAC, for Scality.