3 Hidden Public Cloud Costs and How to Avoid Them

Hybrid or multi-cloud strategies offer flexibility and might be the right choice to address hidden public cloud costs for your organization’s specific needs.

Pete Brey

May 30, 2019


According to Gartner, worldwide public cloud revenue is expected to grow 17.3 percent this year, representing a whopping $206.2 billion. That’s up from just over $175 billion last year.

Clearly, IT organizations are ready to fire up their purchase orders, but before you commit, remember the old saying: “there’s no free lunch.” Hidden costs are an unfortunate byproduct of the public cloud life. Understand what you’re getting into upfront so you can decide when using a public cloud provider is cost effective and appropriate, or when it might be better to go a different route, such as a hybrid or multi-cloud approach.

Ingress costs

Often, public cloud providers' ingress costs--the initial price you pay to sign up and move your data in--are fairly low or non-existent. In some cases, the cloud provider will even transport your data for free.

The issue here is not so much cost as time. Transporting petabytes of data into a public cloud service can take weeks, if not months, during which critical data might be unavailable. You could send it over a private network instead, but there's a time cost to that, too.
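The time cost is easy to underestimate until you do the arithmetic. The sketch below is a back-of-envelope estimate; the dataset size, link speeds, and 70 percent effective utilization are illustrative assumptions, not vendor figures.

```python
# Back-of-envelope estimate of bulk data transfer time.
# The 70% utilization factor is an assumption; real throughput varies.

def transfer_days(data_bytes: float, link_bps: float, utilization: float = 0.7) -> float:
    """Days needed to push `data_bytes` over a link rated at `link_bps` bits/sec."""
    effective_bps = link_bps * utilization
    seconds = (data_bytes * 8) / effective_bps  # bytes -> bits, then divide by rate
    return seconds / 86_400  # seconds per day

PETABYTE = 10**15

print(f"1 PB over 1 Gbps:  {transfer_days(PETABYTE, 1e9):.0f} days")
print(f"1 PB over 10 Gbps: {transfer_days(PETABYTE, 1e10):.0f} days")
```

At these assumed rates, a single petabyte ties up a dedicated 1 Gbps link for over four months--which is why providers offer physical transfer appliances in the first place.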

Transactional costs

Most public cloud providers will charge a nominal fee every time you access your data. Each fee is almost infinitesimal--sometimes averaging pennies per hour--which cloud providers hope to make up in high volume.

Things can get pricey, though, when you're running thousands of analytics jobs. It's easy for a CIO looking for cost savings to say "let's put everything we have in the public cloud" when "everything" is fairly minimal, but as data use rises, so do transactional costs. At that point, using the public cloud exclusively for everything might not be the wisest long-term investment.
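A quick model shows how those pennies compound. The per-request rate below is a hypothetical figure chosen for illustration only; check your provider's current price sheet for real numbers.

```python
# Hedged sketch: how per-request fees compound with analytics volume.
# REQUEST_FEE_PER_1000 is a hypothetical rate, not any provider's actual price.

REQUEST_FEE_PER_1000 = 0.0004  # assumed USD per 1,000 read requests

def monthly_request_cost(jobs_per_day: int, requests_per_job: int) -> float:
    """Estimated monthly request charges for a steady analytics workload."""
    monthly_requests = jobs_per_day * requests_per_job * 30
    return monthly_requests / 1000 * REQUEST_FEE_PER_1000

# A handful of small jobs costs effectively nothing...
print(f"${monthly_request_cost(10, 1_000):.2f}/month")
# ...but thousands of jobs scanning millions of objects add up fast.
print(f"${monthly_request_cost(5_000, 1_000_000):.2f}/month")
```

Under these assumptions, ten small daily jobs cost about a dime a month, while five thousand jobs scanning a million objects each run to tens of thousands of dollars--the same fee schedule, wildly different bills.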

Egress costs

Stop me if you’ve heard this one before: “Our boss asked us to move all of our data to one public cloud provider. Now, we’re trying to move it to another, but we have to rewrite all of our scripts. It’s a huge pain.”

Moving your data from one provider to another can result in significant egress costs, creating a form of cloud provider lock-in that can be difficult to break. Teams must also rewrite their scripts for the new provider, which translates to additional time, money, and lost productivity. You're reinventing not just the wheel but a car's entire engine and chassis.

A hybrid solution

You might be wondering if the public cloud is worth the cost. In many cases, the answer is “yes,” but it depends on your goals.

For better agility, investing in the public cloud is a wise move. Likewise, if you’re a smaller business, you will probably incur fewer transactional costs because you will likely have less data than a larger corporation.

But the answer might be “yes...and no.” You may choose to adopt hybrid and multi-cloud strategies, keeping some data on-premises or split up in different clouds.

A hybrid and multi-cloud strategy provides options. Companies can enjoy the extra tools and capabilities offered by public clouds while keeping costs under control. They don't have to worry about ingress costs, and transactional costs can be minimized. They can also greatly reduce or even eliminate egress costs, since they likely won't need to perform wholesale data migrations between providers and can simply delete their public cloud data if they have an on-premises backup.

Moving data within a hybrid environment

Moving applications between clouds can present its own challenges. Every public cloud provider uses its own cloud storage protocols. Migrating data between these disparate and disconnected protocols can result in egress costs--just what you're trying to avoid.

You need to be able to federate your data so that it can be used across distinct protocols with minimal effort and cost. This can be accomplished by aggregating native storage from different cloud providers into a storage repository that uses a single endpoint to manage all of your organization’s clouds. Instead of manually pulling data out of one and migrating it to another, you can automatically migrate data and applications to and from the appropriate clouds.
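The single-endpoint idea can be sketched in a few lines. This is a minimal, hypothetical illustration--the class and method names are invented for this example, and real federation gateways (typically S3-compatible proxies) expose far richer APIs--but it shows why callers never need to know which cloud holds an object.

```python
# Minimal sketch of a federated storage endpoint: one interface routing
# to per-provider backends. All names here are hypothetical illustrations.

from typing import Dict

class Backend:
    """Toy in-memory stand-in for one provider's native object store."""
    def __init__(self, name: str):
        self.name = name
        self._objects: Dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data
    def get(self, key: str) -> bytes:
        return self._objects[key]

class FederatedEndpoint:
    """Single endpoint that tracks object placement across clouds."""
    def __init__(self):
        self._backends: Dict[str, Backend] = {}
        self._placement: Dict[str, str] = {}  # object key -> backend name

    def register(self, backend: Backend) -> None:
        self._backends[backend.name] = backend

    def put(self, key: str, data: bytes, target: str) -> None:
        self._backends[target].put(key, data)
        self._placement[key] = target

    def get(self, key: str) -> bytes:
        # Callers address the endpoint, not a specific cloud's protocol.
        return self._backends[self._placement[key]].get(key)

endpoint = FederatedEndpoint()
endpoint.register(Backend("on-prem"))
endpoint.register(Backend("public-cloud-a"))
endpoint.put("results.csv", b"q1,q2\n1,2\n", target="public-cloud-a")
print(endpoint.get("results.csv"))
```

Because scripts talk only to the endpoint, relocating an object to a different backend changes the placement table, not the application code--which is exactly the rewrite cost the previous section warned about.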

When combined with container-native storage--highly portable object storage for containerized applications--you can easily transport applications and their associated data between providers. Furthermore, developers can provision this storage automatically, without having to involve their data managers, saving everyone time and headaches and boosting team productivity.
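In Kubernetes terms, that self-service provisioning is typically a PersistentVolumeClaim against a storage class backed by the container-native storage layer. The fragment below is a generic illustration; the `storageClassName` is an assumed example, and your cluster's actual class names will differ.

```yaml
# Hypothetical PersistentVolumeClaim: a developer requests storage and the
# cluster provisions it dynamically -- no ticket to the storage team.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: analytics-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: container-native-storage   # assumed class name
  resources:
    requests:
      storage: 100Gi
```

Applying a claim like this triggers the storage layer to carve out the volume automatically, which is what lets developers skip the manual hand-off described above.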

Call it virtualization of object storage, or protocol translation. Whatever the name, it can all be done without breaking a sweat (or the bank). The result is the optimization of your hybrid or multi-cloud environments and the elimination of the hidden time and costs associated with public cloud storage. 


About the Author(s)

Pete Brey

Pete Brey is marketing manager of exascale storage at Red Hat, covering Red Hat Ceph Storage and the Red Hat data analytics infrastructure solution. Pete brings a broad range of industry experience in object storage and network-attached storage, in roles spanning marketing, product management, and engineering. In his most recent role at NetApp, Pete led product marketing for object storage, OpenStack storage, and OpenShift and Kubernetes persistent storage orchestration. Prior to NetApp, Pete led the development and marketing of OpenStack object storage at Hewlett Packard Enterprise. He also led a groundbreaking project to deliver a Ceph with NVMe storage solution and managed a team of engineers delivering scale-out clustered NAS and WAN acceleration products.
